llms-py 2.0.24__py3-none-any.whl → 2.0.26__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- llms/llms.json +10 -1
- llms/main.py +361 -87
- llms/ui/App.mjs +7 -2
- llms/ui/Avatar.mjs +61 -4
- llms/ui/Main.mjs +8 -5
- llms/ui/OAuthSignIn.mjs +92 -0
- llms/ui/ai.mjs +68 -5
- llms/ui/app.css +36 -0
- {llms_py-2.0.24.dist-info → llms_py-2.0.26.dist-info}/METADATA +343 -42
- {llms_py-2.0.24.dist-info → llms_py-2.0.26.dist-info}/RECORD +14 -13
- {llms_py-2.0.24.dist-info → llms_py-2.0.26.dist-info}/licenses/LICENSE +1 -2
- {llms_py-2.0.24.dist-info → llms_py-2.0.26.dist-info}/WHEEL +0 -0
- {llms_py-2.0.24.dist-info → llms_py-2.0.26.dist-info}/entry_points.txt +0 -0
- {llms_py-2.0.24.dist-info → llms_py-2.0.26.dist-info}/top_level.txt +0 -0
{llms_py-2.0.24.dist-info → llms_py-2.0.26.dist-info}/METADATA

@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: llms-py
-Version: 2.0.24
+Version: 2.0.26
 Summary: A lightweight CLI tool and OpenAI-compatible server for querying multiple Large Language Model (LLM) providers
 Home-page: https://github.com/ServiceStack/llms
 Author: ServiceStack
@@ -50,7 +50,7 @@ Configure additional providers and models in [llms.json](llms/llms.json)
 
 ## Features
 
-- **Lightweight**: Single [llms.py](llms.py) Python file with single `aiohttp` dependency
+- **Lightweight**: Single [llms.py](https://github.com/ServiceStack/llms/blob/main/llms/main.py) Python file with single `aiohttp` dependency
 - **Multi-Provider Support**: OpenRouter, Ollama, Anthropic, Google, OpenAI, Grok, Groq, Qwen, Z.ai, Mistral
 - **OpenAI-Compatible API**: Works with any client that supports OpenAI's chat completion API
 - **Built-in Analytics**: Built-in analytics UI to visualize costs, requests, and token usage
@@ -68,24 +68,51 @@ Configure additional providers and models in [llms.json](llms/llms.json)
 
 Access all your local all remote LLMs with a single ChatGPT-like UI:
 
-[](https://servicestack.net/posts/llms-py-ui)
+[](https://servicestack.net/posts/llms-py-ui)
 
 **Monthly Costs Analysis**
 
 [](https://servicestack.net/posts/llms-py-ui)
 
+**Monthly Token Usage**
+
+[](https://servicestack.net/posts/llms-py-ui)
+
 **Monthly Activity Log**
 
 [](https://servicestack.net/posts/llms-py-ui)
 
 [More Features and Screenshots](https://servicestack.net/posts/llms-py-ui).
 
+**Check Provider Reliability and Response Times**
+
+Check the status of configured providers to test if they're configured correctly, reachable and what their response times is for the simplest `1+1=` request:
+
+```bash
+# Check all models for a provider:
+llms --check groq
+
+# Check specific models for a provider:
+llms --check groq kimi-k2 llama4:400b gpt-oss:120b
+```
+
+[](https://servicestack.net/img/posts/llms-py-ui/llms-check.webp)
+
+As they're a good indicator for the reliability and speed you can expect from different providers we've created a
+[test-providers.yml](https://github.com/ServiceStack/llms/actions/workflows/test-providers.yml) GitHub Action to
+test the response times for all configured providers and models, the results of which will be frequently published to
+[/checks/latest.txt](https://github.com/ServiceStack/llms/blob/main/docs/checks/latest.txt)
+
 ## Installation
 
+### Using pip
+
 ```bash
 pip install llms-py
 ```
 
+- [Using Docker](#using-docker)
+
 ## Quick Start
 
 ### 1. Set API Keys
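The published check results referenced in the hunk above live at `docs/checks/latest.txt` in the repository; a minimal sketch for pulling them from the command line (the raw.githubusercontent.com URL is assumed to be the raw form of the blob link in the README):

```bash
# Fetch the most recently published provider check results (assumed raw URL)
curl -s https://raw.githubusercontent.com/ServiceStack/llms/refs/heads/main/docs/checks/latest.txt
```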
@@ -112,34 +139,99 @@ export OPENROUTER_API_KEY="..."
 | z.ai | `ZAI_API_KEY` | Z.ai API key | `sk-...` |
 | mistral | `MISTRAL_API_KEY` | Mistral API key | `...` |
 
-### 2.
+### 2. Run Server
+
+Start the UI and an OpenAI compatible API on port **8000**:
+
+```bash
+llms --serve 8000
+```
+
+Launches UI at `http://localhost:8000` and OpenAI Endpoint at `http://localhost:8000/v1/chat/completions`.
+
+To see detailed request/response logging, add `--verbose`:
+
+```bash
+llms --serve 8000 --verbose
+```
+
+### Use llms.py CLI
+
+```bash
+llms "What is the capital of France?"
+```
+
+### Enable Providers
+
+Any providers that have their API Keys set and enabled in `llms.json` are automatically made available.
 
-
+Providers can be enabled or disabled in the UI at runtime next to the model selector, or on the command line:
 
 ```bash
-#
-llms --
+# Disable free providers with free models and free tiers
+llms --disable openrouter_free codestral google_free groq
 
 # Enable paid providers
-llms --enable openrouter anthropic google openai
+llms --enable openrouter anthropic google openai grok z.ai qwen mistral
 ```
 
-
+## Using Docker
 
-
+#### a) Simple - Run in a Docker container:
+
+Run the server on port `8000`:
 
 ```bash
-
+docker run -p 8000:8000 -e GROQ_API_KEY=$GROQ_API_KEY ghcr.io/servicestack/llms:latest
 ```
 
-
+Get the latest version:
+
+```bash
+docker pull ghcr.io/servicestack/llms:latest
+```
 
-
+Use custom `llms.json` and `ui.json` config files outside of the container (auto created if they don't exist):
 
 ```bash
-
+docker run -p 8000:8000 -e GROQ_API_KEY=$GROQ_API_KEY \
+  -v ~/.llms:/home/llms/.llms \
+  ghcr.io/servicestack/llms:latest
+```
+
+#### b) Recommended - Use Docker Compose:
+
+Download and use [docker-compose.yml](https://raw.githubusercontent.com/ServiceStack/llms/refs/heads/main/docker-compose.yml):
+
+```bash
+curl -O https://raw.githubusercontent.com/ServiceStack/llms/refs/heads/main/docker-compose.yml
+```
+
+Update API Keys in `docker-compose.yml` then start the server:
+
+```bash
+docker-compose up -d
 ```
 
+#### c) Build and run local Docker image from source:
+
+```bash
+git clone https://github.com/ServiceStack/llms
+
+docker-compose -f docker-compose.local.yml up -d --build
+```
+
+After the container starts, you can access the UI and API at `http://localhost:8000`.
+
+
+See [DOCKER.md](DOCKER.md) for detailed instructions on customizing configuration files.
+
+## GitHub OAuth Authentication
+
+llms.py supports optional GitHub OAuth authentication to secure your web UI and API endpoints. When enabled, users must sign in with their GitHub account before accessing the application.
+
+See [GITHUB_OAUTH_SETUP.md](GITHUB_OAUTH_SETUP.md) for detailed setup instructions.
+
 ## Configuration
 
 The configuration file [llms.json](llms/llms.json) is saved to `~/.llms/llms.json` and defines available providers, models, and default settings. Key sections:
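Since the added Quick Start section above exposes an OpenAI-compatible endpoint at `http://localhost:8000/v1/chat/completions`, any standard chat completion payload should work against it once the server is running; a minimal curl sketch (the model name `kimi-k2` is illustrative, substitute any model shown by `llms --list`):

```bash
# Standard OpenAI-style chat completion request against the local server
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "kimi-k2",
    "messages": [{"role": "user", "content": "What is the capital of France?"}]
  }'
```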
@@ -147,6 +239,10 @@ The configuration file [llms.json](llms/llms.jso
 ### Defaults
 - `headers`: Common HTTP headers for all requests
 - `text`: Default chat completion request template for text prompts
+- `image`: Default chat completion request template for image prompts
+- `audio`: Default chat completion request template for audio prompts
+- `file`: Default chat completion request template for file prompts
+- `check`: Check request template for testing provider connectivity
 
 ### Providers
 
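The new `image`, `audio` and `file` default templates line up with the CLI input flags listed in the usage output further down; a hedged sketch of how they might be exercised (file names and prompts are placeholders):

```bash
# Each input kind is sent with its matching default request template
llms --image photo.jpg "Describe this image"
llms --audio meeting.mp3 "Summarize this recording"
llms --file report.pdf "List the key findings"
```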
@@ -156,7 +252,9 @@ Each provider configuration includes:
 - `api_key`: API key (supports environment variables with `$VAR_NAME`)
 - `base_url`: API endpoint URL
 - `models`: Model name mappings (local name → provider name)
-
+- `pricing`: Pricing per token (input/output) for each model
+- `default_pricing`: Default pricing if not specified in `pricing`
+- `check`: Check request template for testing provider connectivity
 
 ## Command Line Usage
 
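After editing a provider entry (`api_key`, `base_url`, `models`, `pricing`) in `~/.llms/llms.json`, its `check` template can be exercised from the CLI to confirm the entry resolves; a small sketch using the `--config` and `--check` flags from the usage output (provider name illustrative):

```bash
# Validate one provider's models against an explicit config file
llms --config ~/.llms/llms.json --check openrouter
```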
@@ -498,9 +596,6 @@ llms --verbose --logprefix "[DEBUG] " "Hello world"
 # Set default model (updates config file)
 llms --default grok-4
 
-# Update llms.py to latest version
-llms --update
-
 # Pass custom parameters to chat request (URL-encoded)
 llms --args "temperature=0.7&seed=111" "What is 2+2?"
 
@@ -570,19 +665,10 @@ When you set a default model:
 
 ### Updating llms.py
 
-The `--update` option downloads and installs the latest version of `llms.py` from the GitHub repository:
-
 ```bash
-
-llms --update
+pip install llms-py --upgrade
 ```
 
-This command:
-- Downloads the latest `llms.py` from `github.com/ServiceStack/llms/blob/main/llms/main.py`
-- Overwrites your current `llms.py` file with the latest version
-- Preserves your existing configuration file (`llms.json`)
-- Requires an internet connection to download the update
-
 ### Beautiful rendered Markdown
 
 Pipe Markdown output to [glow](https://github.com/charmbracelet/glow) to beautifully render it in the terminal:
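A short sketch of the glow pipeline referenced above, assuming `glow -` reads Markdown from stdin (the prompt is a placeholder):

```bash
# Render the model's Markdown answer nicely in the terminal
llms "Summarize the OpenAI chat completion request format in a table" | glow -
```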
@@ -818,35 +904,249 @@ Example: If both OpenAI and OpenRouter support `kimi-k2`, the request will first
 
 ## Usage
 
-
-
-
-[--file FILE] [--raw] [--list] [--serve PORT] [--enable PROVIDER] [--disable PROVIDER]
-[--default MODEL] [--init] [--logprefix PREFIX] [--verbose] [--update]
+usage: llms [-h] [--config FILE] [-m MODEL] [--chat REQUEST] [-s PROMPT] [--image IMAGE] [--audio AUDIO] [--file FILE]
+            [--args PARAMS] [--raw] [--list] [--check PROVIDER] [--serve PORT] [--enable PROVIDER] [--disable PROVIDER]
+            [--default MODEL] [--init] [--root PATH] [--logprefix PREFIX] [--verbose]
 
-llms
+llms v2.0.24
 
 options:
   -h, --help           show this help message and exit
   --config FILE        Path to config file
-  -m
-  Model to use
+  -m, --model MODEL    Model to use
   --chat REQUEST       OpenAI Chat Completion Request to send
-  -s
-  System prompt to use for chat completion
+  -s, --system PROMPT  System prompt to use for chat completion
   --image IMAGE        Image input to use in chat completion
   --audio AUDIO        Audio input to use in chat completion
   --file FILE          File input to use in chat completion
+  --args PARAMS        URL-encoded parameters to add to chat request (e.g. "temperature=0.7&seed=111")
   --raw                Return raw AI JSON response
   --list               Show list of enabled providers and their models (alias ls provider?)
+  --check PROVIDER     Check validity of models for a provider
   --serve PORT         Port to start an OpenAI Chat compatible server on
   --enable PROVIDER    Enable a provider
   --disable PROVIDER   Disable a provider
   --default MODEL      Configure the default model to use
   --init               Create a default llms.json
+  --root PATH          Change root directory for UI files
   --logprefix PREFIX   Prefix used in log messages
   --verbose            Verbose output
-
+
+## Docker Deployment
+
+### Quick Start with Docker
+
+The easiest way to run llms-py is using Docker:
+
+```bash
+# Using docker-compose (recommended)
+docker-compose up -d
+
+# Or pull and run directly
+docker run -p 8000:8000 \
+  -e OPENROUTER_API_KEY="your-key" \
+  ghcr.io/servicestack/llms:latest
+```
+
+### Docker Images
+
+Pre-built Docker images are automatically published to GitHub Container Registry:
+
+- **Latest stable**: `ghcr.io/servicestack/llms:latest`
+- **Specific version**: `ghcr.io/servicestack/llms:v2.0.24`
+- **Main branch**: `ghcr.io/servicestack/llms:main`
+
+### Environment Variables
+
+Pass API keys as environment variables:
+
+```bash
+docker run -p 8000:8000 \
+  -e OPENROUTER_API_KEY="sk-or-..." \
+  -e GROQ_API_KEY="gsk_..." \
+  -e GOOGLE_FREE_API_KEY="AIza..." \
+  -e ANTHROPIC_API_KEY="sk-ant-..." \
+  -e OPENAI_API_KEY="sk-..." \
+  ghcr.io/servicestack/llms:latest
+```
+
+### Using docker-compose
+
+Create a `docker-compose.yml` file (or use the one in the repository):
+
+```yaml
+version: '3.8'
+
+services:
+  llms:
+    image: ghcr.io/servicestack/llms:latest
+    ports:
+      - "8000:8000"
+    environment:
+      - OPENROUTER_API_KEY=${OPENROUTER_API_KEY}
+      - GROQ_API_KEY=${GROQ_API_KEY}
+      - GOOGLE_FREE_API_KEY=${GOOGLE_FREE_API_KEY}
+    volumes:
+      - llms-data:/home/llms/.llms
+    restart: unless-stopped
+
+volumes:
+  llms-data:
+```
+
+Create a `.env` file with your API keys:
+
+```bash
+OPENROUTER_API_KEY=sk-or-...
+GROQ_API_KEY=gsk_...
+GOOGLE_FREE_API_KEY=AIza...
+```
+
+Start the service:
+
+```bash
+docker-compose up -d
+```
+
+### Building Locally
+
+Build the Docker image from source:
+
+```bash
+# Using the build script
+./docker-build.sh
+
+# Or manually
+docker build -t llms-py:latest .
+
+# Run your local build
+docker run -p 8000:8000 \
+  -e OPENROUTER_API_KEY="your-key" \
+  llms-py:latest
+```
+
+### Volume Mounting
+
+To persist configuration and analytics data between container restarts:
+
+```bash
+# Using a named volume (recommended)
+docker run -p 8000:8000 \
+  -v llms-data:/home/llms/.llms \
+  -e OPENROUTER_API_KEY="your-key" \
+  ghcr.io/servicestack/llms:latest
+
+# Or mount a local directory
+docker run -p 8000:8000 \
+  -v $(pwd)/llms-config:/home/llms/.llms \
+  -e OPENROUTER_API_KEY="your-key" \
+  ghcr.io/servicestack/llms:latest
+```
+
+### Custom Configuration Files
+
+Customize llms-py behavior by providing your own `llms.json` and `ui.json` files:
+
+**Option 1: Mount a directory with custom configs**
+
+```bash
+# Create config directory with your custom files
+mkdir -p config
+# Add your custom llms.json and ui.json to config/
+
+# Mount the directory
+docker run -p 8000:8000 \
+  -v $(pwd)/config:/home/llms/.llms \
+  -e OPENROUTER_API_KEY="your-key" \
+  ghcr.io/servicestack/llms:latest
+```
+
+**Option 2: Mount individual config files**
+
+```bash
+docker run -p 8000:8000 \
+  -v $(pwd)/my-llms.json:/home/llms/.llms/llms.json:ro \
+  -v $(pwd)/my-ui.json:/home/llms/.llms/ui.json:ro \
+  -e OPENROUTER_API_KEY="your-key" \
+  ghcr.io/servicestack/llms:latest
+```
+
+**With docker-compose:**
+
+```yaml
+volumes:
+  # Use local directory
+  - ./config:/home/llms/.llms
+
+  # Or mount individual files
+  # - ./my-llms.json:/home/llms/.llms/llms.json:ro
+  # - ./my-ui.json:/home/llms/.llms/ui.json:ro
+```
+
+The container will auto-create default config files on first run if they don't exist. You can customize these to:
+- Enable/disable specific providers
+- Add or remove models
+- Configure API endpoints
+- Set custom pricing
+- Customize chat templates
+- Configure UI settings
+
+See [DOCKER.md](DOCKER.md) for detailed configuration examples.
+
+### Custom Port
+
+Change the port mapping to run on a different port:
+
+```bash
+# Run on port 3000 instead of 8000
+docker run -p 3000:8000 \
+  -e OPENROUTER_API_KEY="your-key" \
+  ghcr.io/servicestack/llms:latest
+```
+
+### Docker CLI Usage
+
+You can also use the Docker container for CLI commands:
+
+```bash
+# Run a single query
+docker run --rm \
+  -e OPENROUTER_API_KEY="your-key" \
+  ghcr.io/servicestack/llms:latest \
+  llms "What is the capital of France?"
+
+# List available models
+docker run --rm \
+  -e OPENROUTER_API_KEY="your-key" \
+  ghcr.io/servicestack/llms:latest \
+  llms --list
+
+# Check provider status
+docker run --rm \
+  -e GROQ_API_KEY="your-key" \
+  ghcr.io/servicestack/llms:latest \
+  llms --check groq
+```
+
+### Health Checks
+
+The Docker image includes a health check that verifies the server is responding:
+
+```bash
+# Check container health
+docker ps
+
+# View health check logs
+docker inspect --format='{{json .State.Health}}' llms-server
+```
+
+### Multi-Architecture Support
+
+The Docker images support multiple architectures:
+- `linux/amd64` (x86_64)
+- `linux/arm64` (ARM64/Apple Silicon)
+
+Docker will automatically pull the correct image for your platform.
 
 ## Troubleshooting
 
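Besides `docker inspect`, a plain HTTP probe is often enough to confirm the container is actually serving; this sketch only assumes the UI answers on the mapped port (the diff does not name a dedicated health endpoint):

```bash
# Expect a 200 once the container is up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/
```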
@@ -908,9 +1208,10 @@ This shows:
 
 ### Project Structure
 
-- `llms.py` - Main script with CLI and server functionality
-- `llms.json` - Default configuration file
-- `
+- `llms/main.py` - Main script with CLI and server functionality
+- `llms/llms.json` - Default configuration file
+- `llms/ui.json` - UI configuration file
+- `requirements.txt` - Python dependencies (aiohttp)
 
 ### Provider Classes
 
{llms_py-2.0.24.dist-info → llms_py-2.0.26.dist-info}/RECORD

@@ -1,16 +1,17 @@
 llms/__init__.py,sha256=Mk6eHi13yoUxLlzhwfZ6A1IjsfSQt9ShhOdbLXTvffU,53
 llms/__main__.py,sha256=hrBulHIt3lmPm1BCyAEVtB6DQ0Hvc3gnIddhHCmJasg,151
 llms/index.html,sha256=OA9mRmgh-dQrPqb0Z2Jv-cwEZ3YLPRxcWUN7ASjxO8s,2658
-llms/llms.json,sha256=
-llms/main.py,sha256=
+llms/llms.json,sha256=AFGc5-6gWH_czvz0dvkIRPFxkiZluaOwZBBScCVd5HU,41103
+llms/main.py,sha256=yPY7xeVPhvPz3nHd34svZrqD5aOWEDRepotXTapwoAE,82441
 llms/ui.json,sha256=iBOmpNeD5-o8AgUa51ymS-KemovJ7bm9J1fnL0nf8jk,134025
 llms/ui/Analytics.mjs,sha256=mAS5AUQjpnEIMyzGzOGE6fZxwxoVyq5QCitYQSSCEpQ,69151
-llms/ui/App.mjs,sha256=
-llms/ui/Avatar.mjs,sha256=
+llms/ui/App.mjs,sha256=fcDx0psdr4NFgkxotPVwC_uYbUHy1BoKxQWcOi3SMbM,631
+llms/ui/Avatar.mjs,sha256=TgouwV9bN-Ou1Tf2zCDtVaRiUB21TXZZPFCTlFL-xxQ,3387
 llms/ui/Brand.mjs,sha256=0NN2JBLUC0OWERuLz9myrimlcA7v7D5B_EMd0sQQVDo,1905
 llms/ui/ChatPrompt.mjs,sha256=85O_kLVKWbbUDOUlvkuAineam_jrd6lzrj4O00p1XOg,21172
-llms/ui/Main.mjs,sha256=
+llms/ui/Main.mjs,sha256=8-LcEhAbB-HWDHtn0z1DFjgOsWvkcfEQqSUlhWozAVk,40745
 llms/ui/ModelSelector.mjs,sha256=ASLTUaqig3cDMiGup01rpubC2RrrZvPd8IFrYcK8GyQ,2565
+llms/ui/OAuthSignIn.mjs,sha256=4_j4IYzpw9P1ppzxn2QZJQksh9VB6Rfzg6Nf-TfXWSA,4701
 llms/ui/ProviderIcon.mjs,sha256=HTjlgtXEpekn8iNN_S0uswbbvL0iGb20N15-_lXdojk,9054
 llms/ui/ProviderStatus.mjs,sha256=qF_rPdhyt9GffKdPCJdU0yanrDJ3cw1HLPygFP_KjEs,5744
 llms/ui/Recents.mjs,sha256=hmj7V-RXVw-DqMXjUr3OhFHTYQTkvkEhuNEDTGBf3Qw,8448
@@ -20,8 +21,8 @@ llms/ui/SignIn.mjs,sha256=df3b-7L3ZIneDGbJWUk93K9RGo40gVeuR5StzT1ZH9g,2324
 llms/ui/SystemPromptEditor.mjs,sha256=2CyIUvkIubqYPyIp5zC6_I8CMxvYINuYNjDxvMz4VRU,1265
 llms/ui/SystemPromptSelector.mjs,sha256=AuEtRwUf_RkGgene3nVA9bw8AeMb-b5_6ZLJCTWA8KQ,3051
 llms/ui/Welcome.mjs,sha256=QFAxN7sjWlhMvOIJCmHjNFCQcvpM_T-b4ze1ld9Hj7I,912
-llms/ui/ai.mjs,sha256=
-llms/ui/app.css,sha256=
+llms/ui/ai.mjs,sha256=oXfMQ7kCTm8PFq2bOs7flr5tn9PJa36mAyBg2L4SDIg,4768
+llms/ui/app.css,sha256=dYJ83FUYz_j31nxaKKT25xgrYFcoJ0h9ybLum9_VouA,100019
 llms/ui/fav.svg,sha256=_R6MFeXl6wBFT0lqcUxYQIDWgm246YH_3hSTW0oO8qw,734
 llms/ui/markdown.mjs,sha256=O5UspOeD8-E23rxOLWcS4eyy2YejMbPwszCYteVtuoU,6221
 llms/ui/tailwind.input.css,sha256=yo_3A50uyiVSUHUWeqAMorXMhCWpZoE5lTO6OJIFlYg,11974
@@ -39,9 +40,9 @@ llms/ui/lib/servicestack-vue.mjs,sha256=r_-khYokisXJAIPDLh8Wq6YtcLAY6HNjtJlCZJjL
 llms/ui/lib/vue-router.min.mjs,sha256=fR30GHoXI1u81zyZ26YEU105pZgbbAKSXbpnzFKIxls,30418
 llms/ui/lib/vue.min.mjs,sha256=iXh97m5hotl0eFllb3aoasQTImvp7mQoRJ_0HoxmZkw,163811
 llms/ui/lib/vue.mjs,sha256=dS8LKOG01t9CvZ04i0tbFXHqFXOO_Ha4NmM3BytjQAs,537071
-llms_py-2.0.
-llms_py-2.0.
-llms_py-2.0.
-llms_py-2.0.
-llms_py-2.0.
-llms_py-2.0.
+llms_py-2.0.26.dist-info/licenses/LICENSE,sha256=bus9cuAOWeYqBk2OuhSABVV1P4z7hgrEFISpyda_H5w,1532
+llms_py-2.0.26.dist-info/METADATA,sha256=gKYhj_tL1Mw6HZLDKYYrQneBXoqlQy-aHp4ctWiIQYo,36283
+llms_py-2.0.26.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+llms_py-2.0.26.dist-info/entry_points.txt,sha256=WswyE7PfnkZMIxboC-MS6flBD6wm-CYU7JSUnMhqMfM,40
+llms_py-2.0.26.dist-info/top_level.txt,sha256=gC7hk9BKSeog8gyg-EM_g2gxm1mKHwFRfK-10BxOsa4,5
+llms_py-2.0.26.dist-info/RECORD,,
{llms_py-2.0.24.dist-info → llms_py-2.0.26.dist-info}/licenses/LICENSE

@@ -1,6 +1,5 @@
 Copyright (c) 2007-present, Demis Bellot, ServiceStack, Inc.
 https://servicestack.net
-All rights reserved.
 
 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions are met:
@@ -9,7 +8,7 @@ modification, are permitted provided that the following conditions are met:
     * Redistributions in binary form must reproduce the above copyright
       notice, this list of conditions and the following disclaimer in the
       documentation and/or other materials provided with the distribution.
-    * Neither the name of the
+    * Neither the name of the copyright holder nor the
       names of its contributors may be used to endorse or promote products
       derived from this software without specific prior written permission.
 

{llms_py-2.0.24.dist-info → llms_py-2.0.26.dist-info}/WHEEL: file without changes
{llms_py-2.0.24.dist-info → llms_py-2.0.26.dist-info}/entry_points.txt: file without changes
{llms_py-2.0.24.dist-info → llms_py-2.0.26.dist-info}/top_level.txt: file without changes