llms-py 2.0.10__tar.gz → 2.0.12__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (49)
  1. {llms_py-2.0.10/llms_py.egg-info → llms_py-2.0.12}/PKG-INFO +33 -25
  2. {llms_py-2.0.10 → llms_py-2.0.12}/README.md +32 -24
  3. {llms_py-2.0.10 → llms_py-2.0.12}/llms.json +4 -1
  4. {llms_py-2.0.10 → llms_py-2.0.12}/llms.py +1 -1
  5. {llms_py-2.0.10 → llms_py-2.0.12/llms_py.egg-info}/PKG-INFO +33 -25
  6. {llms_py-2.0.10 → llms_py-2.0.12}/pyproject.toml +1 -1
  7. {llms_py-2.0.10 → llms_py-2.0.12}/setup.py +1 -1
  8. {llms_py-2.0.10 → llms_py-2.0.12}/ui/ai.mjs +1 -0
  9. {llms_py-2.0.10 → llms_py-2.0.12}/LICENSE +0 -0
  10. {llms_py-2.0.10 → llms_py-2.0.12}/MANIFEST.in +0 -0
  11. {llms_py-2.0.10 → llms_py-2.0.12}/index.html +0 -0
  12. {llms_py-2.0.10 → llms_py-2.0.12}/llms_py.egg-info/SOURCES.txt +0 -0
  13. {llms_py-2.0.10 → llms_py-2.0.12}/llms_py.egg-info/dependency_links.txt +0 -0
  14. {llms_py-2.0.10 → llms_py-2.0.12}/llms_py.egg-info/entry_points.txt +0 -0
  15. {llms_py-2.0.10 → llms_py-2.0.12}/llms_py.egg-info/not-zip-safe +0 -0
  16. {llms_py-2.0.10 → llms_py-2.0.12}/llms_py.egg-info/requires.txt +0 -0
  17. {llms_py-2.0.10 → llms_py-2.0.12}/llms_py.egg-info/top_level.txt +0 -0
  18. {llms_py-2.0.10 → llms_py-2.0.12}/requirements.txt +0 -0
  19. {llms_py-2.0.10 → llms_py-2.0.12}/setup.cfg +0 -0
  20. {llms_py-2.0.10 → llms_py-2.0.12}/ui/App.mjs +0 -0
  21. {llms_py-2.0.10 → llms_py-2.0.12}/ui/Avatar.mjs +0 -0
  22. {llms_py-2.0.10 → llms_py-2.0.12}/ui/Brand.mjs +0 -0
  23. {llms_py-2.0.10 → llms_py-2.0.12}/ui/ChatPrompt.mjs +0 -0
  24. {llms_py-2.0.10 → llms_py-2.0.12}/ui/Main.mjs +0 -0
  25. {llms_py-2.0.10 → llms_py-2.0.12}/ui/ModelSelector.mjs +0 -0
  26. {llms_py-2.0.10 → llms_py-2.0.12}/ui/ProviderStatus.mjs +0 -0
  27. {llms_py-2.0.10 → llms_py-2.0.12}/ui/Recents.mjs +0 -0
  28. {llms_py-2.0.10 → llms_py-2.0.12}/ui/SettingsDialog.mjs +0 -0
  29. {llms_py-2.0.10 → llms_py-2.0.12}/ui/Sidebar.mjs +0 -0
  30. {llms_py-2.0.10 → llms_py-2.0.12}/ui/SignIn.mjs +0 -0
  31. {llms_py-2.0.10 → llms_py-2.0.12}/ui/SystemPromptEditor.mjs +0 -0
  32. {llms_py-2.0.10 → llms_py-2.0.12}/ui/SystemPromptSelector.mjs +0 -0
  33. {llms_py-2.0.10 → llms_py-2.0.12}/ui/Welcome.mjs +0 -0
  34. {llms_py-2.0.10 → llms_py-2.0.12}/ui/app.css +0 -0
  35. {llms_py-2.0.10 → llms_py-2.0.12}/ui/fav.svg +0 -0
  36. {llms_py-2.0.10 → llms_py-2.0.12}/ui/lib/highlight.min.mjs +0 -0
  37. {llms_py-2.0.10 → llms_py-2.0.12}/ui/lib/idb.min.mjs +0 -0
  38. {llms_py-2.0.10 → llms_py-2.0.12}/ui/lib/marked.min.mjs +0 -0
  39. {llms_py-2.0.10 → llms_py-2.0.12}/ui/lib/servicestack-client.mjs +0 -0
  40. {llms_py-2.0.10 → llms_py-2.0.12}/ui/lib/servicestack-vue.mjs +0 -0
  41. {llms_py-2.0.10 → llms_py-2.0.12}/ui/lib/vue-router.min.mjs +0 -0
  42. {llms_py-2.0.10 → llms_py-2.0.12}/ui/lib/vue.min.mjs +0 -0
  43. {llms_py-2.0.10 → llms_py-2.0.12}/ui/lib/vue.mjs +0 -0
  44. {llms_py-2.0.10 → llms_py-2.0.12}/ui/markdown.mjs +0 -0
  45. {llms_py-2.0.10 → llms_py-2.0.12}/ui/tailwind.input.css +0 -0
  46. {llms_py-2.0.10 → llms_py-2.0.12}/ui/threadStore.mjs +0 -0
  47. {llms_py-2.0.10 → llms_py-2.0.12}/ui/typography.css +0 -0
  48. {llms_py-2.0.10 → llms_py-2.0.12}/ui/utils.mjs +0 -0
  49. {llms_py-2.0.10 → llms_py-2.0.12}/ui.json +0 -0
{llms_py-2.0.10/llms_py.egg-info → llms_py-2.0.12}/PKG-INFO

@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: llms-py
- Version: 2.0.10
+ Version: 2.0.12
  Summary: A lightweight CLI tool and OpenAI-compatible server for querying multiple Large Language Model (LLM) providers
  Home-page: https://github.com/ServiceStack/llms
  Author: ServiceStack
@@ -97,34 +97,31 @@ pip install aiohttp

  ## Quick Start

- ### 1. Initialize Configuration
-
- Create a default configuration file:
-
- ```bash
- llms --init
- ```
-
- This saves the latest [llms.json](llms.json) configuration to `~/.llms/llms.json`.
-
- Modify `~/.llms/llms.json` to enable providers, add required API keys, additional models or any custom
- OpenAI-compatible providers.
-
- ### 2. Set API Keys
+ ### 1. Set API Keys

  Set environment variables for the providers you want to use:

  ```bash
- export OPENROUTER_API_KEY="..."
- export GROQ_API_KEY="..."
- export GOOGLE_API_KEY="..."
- export ANTHROPIC_API_KEY="..."
- export GROK_API_KEY="..."
- export DASHSCOPE_API_KEY="..."
- # ... etc
+ export OPENROUTER_FREE_API_KEY="..."
  ```

- ### 3. Enable Providers
+ | Provider | Variable | Description | Example |
+ |-----------------|---------------------------|---------------------|---------|
+ | openrouter_free | `OPENROUTER_FREE_API_KEY` | OpenRouter FREE models API key | `sk-or-...` |
+ | groq | `GROQ_API_KEY` | Groq API key | `gsk_...` |
+ | google_free | `GOOGLE_FREE_API_KEY` | Google FREE API key | `AIza...` |
+ | codestral | `CODESTRAL_API_KEY` | Codestral API key | `...` |
+ | ollama | N/A | No API key required | |
+ | openrouter | `OPENROUTER_API_KEY` | OpenRouter API key | `sk-or-...` |
+ | google | `GOOGLE_API_KEY` | Google API key | `AIza...` |
+ | anthropic | `ANTHROPIC_API_KEY` | Anthropic API key | `sk-ant-...` |
+ | openai | `OPENAI_API_KEY` | OpenAI API key | `sk-...` |
+ | grok | `GROK_API_KEY` | Grok (X.AI) API key | `xai-...` |
+ | qwen | `DASHSCOPE_API_KEY` | Qwen (Alibaba) API key | `sk-...` |
+ | z.ai | `ZAI_API_KEY` | Z.ai API key | `sk-...` |
+ | mistral | `MISTRAL_API_KEY` | Mistral API key | `...` |
+
+ ### 2. Enable Providers

  Enable the providers you want to use:

@@ -136,7 +133,17 @@ llms --enable openrouter_free google_free groq
  llms --enable openrouter anthropic google openai mistral grok qwen
  ```

- ### 4. Start Chatting
+ ### 3. Run UI
+
+ Start the UI and an OpenAI compatible API on port **8000**:
+
+ ```bash
+ llms --serve 8000
+ ```
+
+ Launches the UI at `http://localhost:8000` and an OpenAI Endpoint at `http://localhost:8000/v1/chat/completions`.
+
+ ### 4. Use llms.py CLI

  ```bash
  llms "What is the capital of France?"
@@ -144,7 +151,7 @@ llms "What is the capital of France?"

  ## Configuration

- The configuration file (`llms.json`) defines available providers, models, and default settings. Key sections:
+ The configuration file [llms.json](llms.json) is saved to `~/.llms/llms.json` and defines available providers, models, and default settings. Key sections:

  ### Defaults
  - `headers`: Common HTTP headers for all requests
@@ -159,6 +166,7 @@ Each provider configuration includes:
  - `base_url`: API endpoint URL
  - `models`: Model name mappings (local name → provider name)

+
  ## Command Line Usage

  ### Basic Chat
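The Quick Start changes above replace "Start Chatting" with a "Run UI" step that exposes an OpenAI-compatible endpoint at `http://localhost:8000/v1/chat/completions`. A minimal client sketch against that endpoint, assuming the standard chat-completions request/response schema (the `build_chat_request`/`ask` helpers and the model name are illustrative, not part of the package):

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    """Build a standard OpenAI chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(base_url: str, model: str, prompt: str) -> str:
    """POST a prompt to an OpenAI-compatible /v1/chat/completions endpoint."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Requires `llms --serve 8000` running locally, e.g.:
# print(ask("http://localhost:8000", "llama4:109b", "What is the capital of France?"))
```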
{llms_py-2.0.10 → llms_py-2.0.12}/README.md

@@ -57,34 +57,31 @@ pip install aiohttp

  ## Quick Start

- ### 1. Initialize Configuration
-
- Create a default configuration file:
-
- ```bash
- llms --init
- ```
-
- This saves the latest [llms.json](llms.json) configuration to `~/.llms/llms.json`.
-
- Modify `~/.llms/llms.json` to enable providers, add required API keys, additional models or any custom
- OpenAI-compatible providers.
-
- ### 2. Set API Keys
+ ### 1. Set API Keys

  Set environment variables for the providers you want to use:

  ```bash
- export OPENROUTER_API_KEY="..."
- export GROQ_API_KEY="..."
- export GOOGLE_API_KEY="..."
- export ANTHROPIC_API_KEY="..."
- export GROK_API_KEY="..."
- export DASHSCOPE_API_KEY="..."
- # ... etc
+ export OPENROUTER_FREE_API_KEY="..."
  ```

- ### 3. Enable Providers
+ | Provider | Variable | Description | Example |
+ |-----------------|---------------------------|---------------------|---------|
+ | openrouter_free | `OPENROUTER_FREE_API_KEY` | OpenRouter FREE models API key | `sk-or-...` |
+ | groq | `GROQ_API_KEY` | Groq API key | `gsk_...` |
+ | google_free | `GOOGLE_FREE_API_KEY` | Google FREE API key | `AIza...` |
+ | codestral | `CODESTRAL_API_KEY` | Codestral API key | `...` |
+ | ollama | N/A | No API key required | |
+ | openrouter | `OPENROUTER_API_KEY` | OpenRouter API key | `sk-or-...` |
+ | google | `GOOGLE_API_KEY` | Google API key | `AIza...` |
+ | anthropic | `ANTHROPIC_API_KEY` | Anthropic API key | `sk-ant-...` |
+ | openai | `OPENAI_API_KEY` | OpenAI API key | `sk-...` |
+ | grok | `GROK_API_KEY` | Grok (X.AI) API key | `xai-...` |
+ | qwen | `DASHSCOPE_API_KEY` | Qwen (Alibaba) API key | `sk-...` |
+ | z.ai | `ZAI_API_KEY` | Z.ai API key | `sk-...` |
+ | mistral | `MISTRAL_API_KEY` | Mistral API key | `...` |
+
+ ### 2. Enable Providers

  Enable the providers you want to use:

@@ -96,7 +93,17 @@ llms --enable openrouter_free google_free groq
  llms --enable openrouter anthropic google openai mistral grok qwen
  ```

- ### 4. Start Chatting
+ ### 3. Run UI
+
+ Start the UI and an OpenAI compatible API on port **8000**:
+
+ ```bash
+ llms --serve 8000
+ ```
+
+ Launches the UI at `http://localhost:8000` and an OpenAI Endpoint at `http://localhost:8000/v1/chat/completions`.
+
+ ### 4. Use llms.py CLI

  ```bash
  llms "What is the capital of France?"
@@ -104,7 +111,7 @@ llms "What is the capital of France?"

  ## Configuration

- The configuration file (`llms.json`) defines available providers, models, and default settings. Key sections:
+ The configuration file [llms.json](llms.json) is saved to `~/.llms/llms.json` and defines available providers, models, and default settings. Key sections:

  ### Defaults
  - `headers`: Common HTTP headers for all requests
@@ -119,6 +126,7 @@ Each provider configuration includes:
  - `base_url`: API endpoint URL
  - `models`: Model name mappings (local name → provider name)

+
  ## Command Line Usage

  ### Basic Chat
{llms_py-2.0.10 → llms_py-2.0.12}/llms.json

@@ -86,7 +86,7 @@
  "enabled": true,
  "type": "OpenAiProvider",
  "base_url": "https://openrouter.ai/api",
- "api_key": "$OPENROUTER_FREE_API_KEY",
+ "api_key": "$OPENROUTER_API_KEY",
  "models": {
  "qwen2.5vl": "qwen/qwen2.5-vl-32b-instruct:free",
  "llama4:109b": "meta-llama/llama-4-scout:free",
@@ -236,6 +236,7 @@
  "qwen3-max": "qwen/qwen3-max",
  "qwen3-vl:235b": "qwen/qwen3-vl-235b-a22b-instruct",
  "qwen3-vl-thinking:235b": "qwen/qwen3-vl-235b-a22b-thinking",
+ "ling-1t": "inclusionai/ling-1t",
  "llama4:109b": "meta-llama/llama-4-scout",
  "llama4:400b": "meta-llama/llama-4-maverick"
  }
@@ -283,6 +284,7 @@
  "base_url": "https://api.openai.com",
  "api_key": "$OPENAI_API_KEY",
  "models": {
+ "gpt-5-pro": "openai/gpt-5-pro",
  "gpt-5-codex": "gpt-5-codex",
  "gpt-audio": "gpt-audio",
  "gpt-realtime": "gpt-realtime",
@@ -386,6 +388,7 @@
  "qwen3-coder:30b": "qwen3-coder-30b-a3b-instruct",
  "qwen3-vl-thinking:235b": "qwen3-vl-235b-a22b-thinking",
  "qwen3-vl:235b": "qwen3-vl-235b-a22b-instruct",
+ "qwen3-vl:30b": "qwen3-vl-30b-a3b-instruct",
  "qwen2.5-vl:72b": "qwen2.5-vl-72b-instruct",
  "qwen2.5-vl:32b": "qwen2.5-vl-32b-instruct",
  "qwen2.5-vl:7b": "qwen2.5-vl-7b-instruct",
{llms_py-2.0.10 → llms_py-2.0.12}/llms.py

@@ -22,7 +22,7 @@ from aiohttp import web
  from pathlib import Path
  from importlib import resources  # Py≥3.9 (pip install importlib_resources for 3.7/3.8)

- VERSION = "2.0.10"
+ VERSION = "2.0.12"
  _ROOT = None
  g_config_path = None
  g_ui_path = None
{llms_py-2.0.10 → llms_py-2.0.12/llms_py.egg-info}/PKG-INFO

@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: llms-py
- Version: 2.0.10
+ Version: 2.0.12
  Summary: A lightweight CLI tool and OpenAI-compatible server for querying multiple Large Language Model (LLM) providers
  Home-page: https://github.com/ServiceStack/llms
  Author: ServiceStack
@@ -97,34 +97,31 @@ pip install aiohttp

  ## Quick Start

- ### 1. Initialize Configuration
-
- Create a default configuration file:
-
- ```bash
- llms --init
- ```
-
- This saves the latest [llms.json](llms.json) configuration to `~/.llms/llms.json`.
-
- Modify `~/.llms/llms.json` to enable providers, add required API keys, additional models or any custom
- OpenAI-compatible providers.
-
- ### 2. Set API Keys
+ ### 1. Set API Keys

  Set environment variables for the providers you want to use:

  ```bash
- export OPENROUTER_API_KEY="..."
- export GROQ_API_KEY="..."
- export GOOGLE_API_KEY="..."
- export ANTHROPIC_API_KEY="..."
- export GROK_API_KEY="..."
- export DASHSCOPE_API_KEY="..."
- # ... etc
+ export OPENROUTER_FREE_API_KEY="..."
  ```

- ### 3. Enable Providers
+ | Provider | Variable | Description | Example |
+ |-----------------|---------------------------|---------------------|---------|
+ | openrouter_free | `OPENROUTER_FREE_API_KEY` | OpenRouter FREE models API key | `sk-or-...` |
+ | groq | `GROQ_API_KEY` | Groq API key | `gsk_...` |
+ | google_free | `GOOGLE_FREE_API_KEY` | Google FREE API key | `AIza...` |
+ | codestral | `CODESTRAL_API_KEY` | Codestral API key | `...` |
+ | ollama | N/A | No API key required | |
+ | openrouter | `OPENROUTER_API_KEY` | OpenRouter API key | `sk-or-...` |
+ | google | `GOOGLE_API_KEY` | Google API key | `AIza...` |
+ | anthropic | `ANTHROPIC_API_KEY` | Anthropic API key | `sk-ant-...` |
+ | openai | `OPENAI_API_KEY` | OpenAI API key | `sk-...` |
+ | grok | `GROK_API_KEY` | Grok (X.AI) API key | `xai-...` |
+ | qwen | `DASHSCOPE_API_KEY` | Qwen (Alibaba) API key | `sk-...` |
+ | z.ai | `ZAI_API_KEY` | Z.ai API key | `sk-...` |
+ | mistral | `MISTRAL_API_KEY` | Mistral API key | `...` |
+
+ ### 2. Enable Providers

  Enable the providers you want to use:

@@ -136,7 +133,17 @@ llms --enable openrouter_free google_free groq
  llms --enable openrouter anthropic google openai mistral grok qwen
  ```

- ### 4. Start Chatting
+ ### 3. Run UI
+
+ Start the UI and an OpenAI compatible API on port **8000**:
+
+ ```bash
+ llms --serve 8000
+ ```
+
+ Launches the UI at `http://localhost:8000` and an OpenAI Endpoint at `http://localhost:8000/v1/chat/completions`.
+
+ ### 4. Use llms.py CLI

  ```bash
  llms "What is the capital of France?"
@@ -144,7 +151,7 @@ llms "What is the capital of France?"

  ## Configuration

- The configuration file (`llms.json`) defines available providers, models, and default settings. Key sections:
+ The configuration file [llms.json](llms.json) is saved to `~/.llms/llms.json` and defines available providers, models, and default settings. Key sections:

  ### Defaults
  - `headers`: Common HTTP headers for all requests
@@ -159,6 +166,7 @@ Each provider configuration includes:
  - `base_url`: API endpoint URL
  - `models`: Model name mappings (local name → provider name)

+
  ## Command Line Usage

  ### Basic Chat
{llms_py-2.0.10 → llms_py-2.0.12}/pyproject.toml

@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

  [project]
  name = "llms-py"
- version = "2.0.10"
+ version = "2.0.12"
  description = "A lightweight CLI tool and OpenAI-compatible server for querying multiple Large Language Model (LLM) providers"
  readme = "README.md"
  license = "BSD-3-Clause"
{llms_py-2.0.10 → llms_py-2.0.12}/setup.py

@@ -16,7 +16,7 @@ with open(os.path.join(this_directory, "requirements.txt"), encoding="utf-8") as

  setup(
      name="llms-py",
-     version="2.0.10",
+     version="2.0.12",
      author="ServiceStack",
      author_email="team@servicestack.net",
      description="A lightweight CLI tool and OpenAI-compatible server for querying multiple Large Language Model (LLM) providers",
{llms_py-2.0.10 → llms_py-2.0.12}/ui/ai.mjs

@@ -6,6 +6,7 @@ const headers = { 'Accept': 'application/json' }
  const prefsKey = 'llms.prefs'

  export const o = {
+     version: '2.0.12',
      base,
      prefsKey,
      welcome: 'Welcome to llms.py',