llms-py 2.0.30__py3-none-any.whl → 2.0.32__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
llms_py-2.0.30.dist-info/METADATA → llms_py-2.0.32.dist-info/METADATA

@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: llms-py
- Version: 2.0.30
+ Version: 2.0.32
 Summary: A lightweight CLI tool and OpenAI-compatible server for querying multiple Large Language Model (LLM) providers
 Home-page: https://github.com/ServiceStack/llms
 Author: ServiceStack
@@ -54,6 +54,7 @@ Configure additional providers and models in [llms.json](llms/llms.json)
 - **Multi-Provider Support**: OpenRouter, Ollama, Anthropic, Google, OpenAI, Grok, Groq, Qwen, Z.ai, Mistral
 - **OpenAI-Compatible API**: Works with any client that supports OpenAI's chat completion API
 - **Built-in Analytics**: Built-in analytics UI to visualize costs, requests, and token usage
+ - **GitHub OAuth**: Optionally secure your web UI and restrict access to specified GitHub users
 - **Configuration Management**: Easy provider enable/disable and configuration management
 - **CLI Interface**: Simple command-line interface for quick interactions
 - **Server Mode**: Run an OpenAI-compatible HTTP server at `http://localhost:{PORT}/v1/chat/completions`
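The Server Mode feature added above speaks OpenAI's chat completion wire format, so any OpenAI-style client can call it. A minimal sketch of such a request using only the standard library, assuming a local server; the port and model id here are illustrative, not taken from the package:

```python
import json
import urllib.request

PORT = 8000  # assumed; llms-py lets you pick the port

# Standard OpenAI-style chat completion payload
payload = {
    "model": "kimi-k2",  # illustrative model id
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
}

def build_request(port: int = PORT) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion POST request."""
    return urllib.request.Request(
        f"http://localhost:{port}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request()
# With the server running, urllib.request.urlopen(req) returns an
# OpenAI-style completion JSON response.
```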
@@ -108,6 +109,26 @@ As they're a good indicator for the reliability and speed you can expect from di
 test the response times for all configured providers and models, the results of which will be frequently published to
 [/checks/latest.txt](https://github.com/ServiceStack/llms/blob/main/docs/checks/latest.txt)
 
+ ## Change Log
+
+ #### v2.0.30 (2025-11-01)
+ - Improved responsive layout with collapsible sidebar
+ - Watch config files for changes and auto-reload
+ - Add cancel button to cancel pending requests
+ - Return focus to textarea after request completes
+ - Clicking outside the model or system prompt selector collapses it
+ - Clicking on a selected item no longer deselects it
+ - Support `VERBOSE=1` for enabling `--verbose` mode (useful in Docker)
+
+ #### v2.0.28 (2025-10-31)
+ - Dark Mode
+ - Drag n' drop files in the message prompt
+ - Copy & paste files in the message prompt
+ - Support for GitHub OAuth and optionally restricting access to specified users
+ - Support for Docker and Docker Compose
+
+ [llms.py Releases](https://github.com/ServiceStack/llms/releases)
+
 ## Installation
 
 ### Using pip
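The `VERBOSE=1` note in the changelog describes a common pattern: letting an environment variable stand in for a CLI flag where editing the command line is awkward, such as in a Dockerfile. A hedged sketch of that pattern; the function name is illustrative, not llms-py's actual internals:

```python
import os

def verbose_enabled(argv: list[str], environ=os.environ) -> bool:
    """True if --verbose was passed, or if VERBOSE=1 is set in the
    environment (handy in Docker, where flags are harder to inject)."""
    if "--verbose" in argv:
        return True
    return environ.get("VERBOSE") == "1"

# e.g. `docker run -e VERBOSE=1 ...` enables verbose mode with no flag
```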
@@ -255,7 +276,10 @@ See [GITHUB_OAUTH_SETUP.md](GITHUB_OAUTH_SETUP.md) for detailed setup instructio
 
 ## Configuration
 
- The configuration file [llms.json](llms/llms.json) is saved to `~/.llms/llms.json` and defines available providers, models, and default settings. Key sections:
+ The configuration file [llms.json](llms/llms.json) is saved to `~/.llms/llms.json` and defines available providers, models, and default settings. If it doesn't exist, `llms.json` is auto-created with the latest
+ configuration, so you can re-create it by deleting your local config (e.g. `rm -rf ~/.llms`).
+
+ Key sections:
 
 ### Defaults
 - `headers`: Common HTTP headers for all requests
@@ -268,7 +292,6 @@ The configuration file [llms.json](llms/llms.json) is saved to `~/.llms/llms.jso
 - `convert`: Max image size and length limits and auto conversion settings
 
 ### Providers
-
 Each provider configuration includes:
 - `enabled`: Whether the provider is active
 - `type`: Provider class (OpenAiProvider, GoogleProvider, etc.)
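The Providers section maps naturally onto a small config loader. A sketch under the assumption that `~/.llms/llms.json` holds a `providers` object keyed by provider name, each entry carrying the `enabled` and `type` fields listed above; the key name and sample data are illustrative:

```python
import json
from pathlib import Path

# Illustrative fragment of the assumed llms.json shape
SAMPLE = {
    "providers": {
        "openrouter": {"enabled": True, "type": "OpenAiProvider"},
        "google": {"enabled": False, "type": "GoogleProvider"},
    }
}

def enabled_providers(config: dict) -> list[str]:
    """Names of providers whose `enabled` flag is set."""
    return [name for name, p in config.get("providers", {}).items()
            if p.get("enabled")]

def load_config(path: str = "~/.llms/llms.json") -> dict:
    """Read the saved config; callers should tolerate a missing file,
    since llms-py recreates it on the next run."""
    return json.loads(Path(path).expanduser().read_text())
```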

llms_py-2.0.30.dist-info/RECORD → llms_py-2.0.32.dist-info/RECORD

@@ -1,11 +1,11 @@
 llms/__init__.py,sha256=Mk6eHi13yoUxLlzhwfZ6A1IjsfSQt9ShhOdbLXTvffU,53
 llms/__main__.py,sha256=hrBulHIt3lmPm1BCyAEVtB6DQ0Hvc3gnIddhHCmJasg,151
 llms/index.html,sha256=_pkjdzCX95HTf19LgE4gMh6tLittcnf7M_jL2hSEbbM,3250
- llms/llms.json,sha256=oTMlVM3nYeooQgsPbIGN2LQ-1aq0u1v38sjmxHPiGAc,41331
- llms/main.py,sha256=TIJZL21UCZfyRMh_HPdsFi2ahpIFgs6o4KkRSkxTwqc,90617
+ llms/llms.json,sha256=4y4BbQcwul4HdUU9LfZOJWyUjTt0nFzGkT_Mlyd7dVU,31945
+ llms/main.py,sha256=z-lqetSCyTfhIANICBM3KrzUs_xSc427KohopzpklHA,90865
 llms/ui.json,sha256=iBOmpNeD5-o8AgUa51ymS-KemovJ7bm9J1fnL0nf8jk,134025
 llms/ui/Analytics.mjs,sha256=LfWbUlpb__0EEYtHu6e4r8AeyhsNQeAxrg44RuNSR0M,73261
- llms/ui/App.mjs,sha256=S-yomlT-6UMB9R5dx-W6EEO4pcE3ldV5e9mr1Kk_Lyw,3822
+ llms/ui/App.mjs,sha256=L8Zn7b7YVqR5jgVQKvo_txijSd1T7jq6QOQEt7Q0eB0,3811
 llms/ui/Avatar.mjs,sha256=TgouwV9bN-Ou1Tf2zCDtVaRiUB21TXZZPFCTlFL-xxQ,3387
 llms/ui/Brand.mjs,sha256=JLN_lPirNXqS332g0B_WVOlFRVg3lNe1Q56TRnpj0zQ,3411
 llms/ui/ChatPrompt.mjs,sha256=7Bx2-ossJPm8F2n9M82vNt8J-ayHEXft3qctd9TeSdw,27147
@@ -21,8 +21,8 @@ llms/ui/SignIn.mjs,sha256=df3b-7L3ZIneDGbJWUk93K9RGo40gVeuR5StzT1ZH9g,2324
 llms/ui/SystemPromptEditor.mjs,sha256=PffkNPV6hGbm1QZBKPI7yvWPZSBL7qla0d-JEJ4mxYo,1466
 llms/ui/SystemPromptSelector.mjs,sha256=UgoeuscFes0B1oFkx74dFwC0JgRib37VM4Gy3-kCVDQ,3769
 llms/ui/Welcome.mjs,sha256=r9j7unF9CF3k7gEQBMRMVsa2oSjgHGNn46Oa5l5BwlY,950
- llms/ui/ai.mjs,sha256=Mrhsr-Y9VRWRZ9oSMu4z6u6nxll-xlRz0Cj1R9jv72A,4849
- llms/ui/app.css,sha256=B0GMDo-hRJ5ufV4d1qYRpIX3LSnBfBZMHeqrDcC1z8A,113515
+ llms/ui/ai.mjs,sha256=yJTZUqDgbiAwhR_BtNTEw_5ef8b0pmN1S19HtbBG4HI,4849
+ llms/ui/app.css,sha256=m6wR6XCzJWbUs0K_MDyGbcnxsWOu2Q58nGpAL646kio,111026
 llms/ui/fav.svg,sha256=_R6MFeXl6wBFT0lqcUxYQIDWgm246YH_3hSTW0oO8qw,734
 llms/ui/markdown.mjs,sha256=uWSyBZZ8a76Dkt53q6CJzxg7Gkx7uayX089td3Srv8w,6388
 llms/ui/tailwind.input.css,sha256=QInTVDpCR89OTzRo9AePdAa-MX3i66RkhNOfa4_7UAg,12086
@@ -36,13 +36,13 @@ llms/ui/lib/highlight.min.mjs,sha256=sG7wq8bF-IKkfie7S4QSyh5DdHBRf0NqQxMOEH8-MT0
 llms/ui/lib/idb.min.mjs,sha256=CeTXyV4I_pB5vnibvJuyXdMs0iVF2ZL0Z7cdm3w_QaI,3853
 llms/ui/lib/marked.min.mjs,sha256=QRHb_VZugcBJRD2EP6gYlVFEsJw5C2fQ8ImMf_pA2_s,39488
 llms/ui/lib/servicestack-client.mjs,sha256=UVafVbzhJ_0N2lzv7rlzIbzwnWpoqXxGk3N3FSKgOOc,54534
- llms/ui/lib/servicestack-vue.mjs,sha256=EU3cnlQuTzsmPvoK50JFN98t4AO80vVNA-CS2kaS0nI,215162
+ llms/ui/lib/servicestack-vue.mjs,sha256=n1auUBv4nAdEwZoDiPBRSMizH6M9GP_5b8QSDCR-3wI,215269
 llms/ui/lib/vue-router.min.mjs,sha256=fR30GHoXI1u81zyZ26YEU105pZgbbAKSXbpnzFKIxls,30418
 llms/ui/lib/vue.min.mjs,sha256=iXh97m5hotl0eFllb3aoasQTImvp7mQoRJ_0HoxmZkw,163811
 llms/ui/lib/vue.mjs,sha256=dS8LKOG01t9CvZ04i0tbFXHqFXOO_Ha4NmM3BytjQAs,537071
- llms_py-2.0.30.dist-info/licenses/LICENSE,sha256=bus9cuAOWeYqBk2OuhSABVV1P4z7hgrEFISpyda_H5w,1532
- llms_py-2.0.30.dist-info/METADATA,sha256=Z8sG5vefO_CfmlEX3myTw0NF8diiyoMreMTNni4w2OY,37076
- llms_py-2.0.30.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
- llms_py-2.0.30.dist-info/entry_points.txt,sha256=WswyE7PfnkZMIxboC-MS6flBD6wm-CYU7JSUnMhqMfM,40
- llms_py-2.0.30.dist-info/top_level.txt,sha256=gC7hk9BKSeog8gyg-EM_g2gxm1mKHwFRfK-10BxOsa4,5
- llms_py-2.0.30.dist-info/RECORD,,
+ llms_py-2.0.32.dist-info/licenses/LICENSE,sha256=bus9cuAOWeYqBk2OuhSABVV1P4z7hgrEFISpyda_H5w,1532
+ llms_py-2.0.32.dist-info/METADATA,sha256=kEdwuKJiSHH-J1RIHldQp9w7j42tAQhmeoeczw6YL7A,38070
+ llms_py-2.0.32.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+ llms_py-2.0.32.dist-info/entry_points.txt,sha256=WswyE7PfnkZMIxboC-MS6flBD6wm-CYU7JSUnMhqMfM,40
+ llms_py-2.0.32.dist-info/top_level.txt,sha256=gC7hk9BKSeog8gyg-EM_g2gxm1mKHwFRfK-10BxOsa4,5
+ llms_py-2.0.32.dist-info/RECORD,,