llms-py 2.0.30__tar.gz → 2.0.32__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (58)
  1. {llms_py-2.0.30/llms_py.egg-info → llms_py-2.0.32}/PKG-INFO +26 -3
  2. {llms_py-2.0.30 → llms_py-2.0.32}/README.md +25 -2
  3. llms_py-2.0.32/llms/llms.json +1121 -0
  4. {llms_py-2.0.30 → llms_py-2.0.32}/llms/main.py +6 -1
  5. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/App.mjs +1 -1
  6. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/ai.mjs +1 -1
  7. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/app.css +0 -118
  8. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/lib/servicestack-vue.mjs +9 -9
  9. {llms_py-2.0.30 → llms_py-2.0.32/llms_py.egg-info}/PKG-INFO +26 -3
  10. {llms_py-2.0.30 → llms_py-2.0.32}/pyproject.toml +1 -1
  11. {llms_py-2.0.30 → llms_py-2.0.32}/setup.py +1 -1
  12. llms_py-2.0.30/llms/llms.json +0 -1118
  13. {llms_py-2.0.30 → llms_py-2.0.32}/LICENSE +0 -0
  14. {llms_py-2.0.30 → llms_py-2.0.32}/MANIFEST.in +0 -0
  15. {llms_py-2.0.30 → llms_py-2.0.32}/llms/__init__.py +0 -0
  16. {llms_py-2.0.30 → llms_py-2.0.32}/llms/__main__.py +0 -0
  17. {llms_py-2.0.30 → llms_py-2.0.32}/llms/index.html +0 -0
  18. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/Analytics.mjs +0 -0
  19. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/Avatar.mjs +0 -0
  20. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/Brand.mjs +0 -0
  21. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/ChatPrompt.mjs +0 -0
  22. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/Main.mjs +0 -0
  23. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/ModelSelector.mjs +0 -0
  24. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/OAuthSignIn.mjs +0 -0
  25. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/ProviderIcon.mjs +0 -0
  26. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/ProviderStatus.mjs +0 -0
  27. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/Recents.mjs +0 -0
  28. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/SettingsDialog.mjs +0 -0
  29. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/Sidebar.mjs +0 -0
  30. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/SignIn.mjs +0 -0
  31. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/SystemPromptEditor.mjs +0 -0
  32. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/SystemPromptSelector.mjs +0 -0
  33. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/Welcome.mjs +0 -0
  34. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/fav.svg +0 -0
  35. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/lib/chart.js +0 -0
  36. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/lib/charts.mjs +0 -0
  37. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/lib/color.js +0 -0
  38. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/lib/highlight.min.mjs +0 -0
  39. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/lib/idb.min.mjs +0 -0
  40. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/lib/marked.min.mjs +0 -0
  41. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/lib/servicestack-client.mjs +0 -0
  42. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/lib/vue-router.min.mjs +0 -0
  43. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/lib/vue.min.mjs +0 -0
  44. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/lib/vue.mjs +0 -0
  45. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/markdown.mjs +0 -0
  46. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/tailwind.input.css +0 -0
  47. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/threadStore.mjs +0 -0
  48. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/typography.css +0 -0
  49. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui/utils.mjs +0 -0
  50. {llms_py-2.0.30 → llms_py-2.0.32}/llms/ui.json +0 -0
  51. {llms_py-2.0.30 → llms_py-2.0.32}/llms_py.egg-info/SOURCES.txt +0 -0
  52. {llms_py-2.0.30 → llms_py-2.0.32}/llms_py.egg-info/dependency_links.txt +0 -0
  53. {llms_py-2.0.30 → llms_py-2.0.32}/llms_py.egg-info/entry_points.txt +0 -0
  54. {llms_py-2.0.30 → llms_py-2.0.32}/llms_py.egg-info/not-zip-safe +0 -0
  55. {llms_py-2.0.30 → llms_py-2.0.32}/llms_py.egg-info/requires.txt +0 -0
  56. {llms_py-2.0.30 → llms_py-2.0.32}/llms_py.egg-info/top_level.txt +0 -0
  57. {llms_py-2.0.30 → llms_py-2.0.32}/requirements.txt +0 -0
  58. {llms_py-2.0.30 → llms_py-2.0.32}/setup.cfg +0 -0
PKG-INFO:

@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: llms-py
-Version: 2.0.30
+Version: 2.0.32
 Summary: A lightweight CLI tool and OpenAI-compatible server for querying multiple Large Language Model (LLM) providers
 Home-page: https://github.com/ServiceStack/llms
 Author: ServiceStack
@@ -54,6 +54,7 @@ Configure additional providers and models in [llms.json](llms/llms.json)
 - **Multi-Provider Support**: OpenRouter, Ollama, Anthropic, Google, OpenAI, Grok, Groq, Qwen, Z.ai, Mistral
 - **OpenAI-Compatible API**: Works with any client that supports OpenAI's chat completion API
 - **Built-in Analytics**: Built-in analytics UI to visualize costs, requests, and token usage
+- **GitHub OAuth**: Optionally secure your web UI and restrict access to specified GitHub Users
 - **Configuration Management**: Easy provider enable/disable and configuration management
 - **CLI Interface**: Simple command-line interface for quick interactions
 - **Server Mode**: Run an OpenAI-compatible HTTP server at `http://localhost:{PORT}/v1/chat/completions`
@@ -108,6 +109,26 @@ As they're a good indicator for the reliability and speed you can expect from di
 test the response times for all configured providers and models, the results of which will be frequently published to
 [/checks/latest.txt](https://github.com/ServiceStack/llms/blob/main/docs/checks/latest.txt)
 
+## Change Log
+
+#### v2.0.30 (2025-11-01)
+- Improved Responsive Layout with collapsible Sidebar
+- Watching config files for changes and auto-reloading
+- Add cancel button to cancel pending request
+- Return focus to textarea after request completes
+- Clicking outside model or system prompt selector will collapse it
+- Clicking on selected item no longer deselects it
+- Support `VERBOSE=1` for enabling `--verbose` mode (useful in Docker)
+
+#### v2.0.28 (2025-10-31)
+- Dark Mode
+- Drag n' Drop files in Message prompt
+- Copy & Paste files in Message prompt
+- Support for GitHub OAuth and optionally restrict access to specified Users
+- Support for Docker and Docker Compose
+
+[llms.py Releases](https://github.com/ServiceStack/llms/releases)
+
 ## Installation
 
 ### Using pip
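The v2.0.30 changelog entry above adds `VERBOSE=1` as an environment-variable equivalent of the `--verbose` flag, which is handy in Docker where CLI flags are awkward to inject. A minimal sketch of that behavior (not the actual llms.py implementation):

```python
import os

def verbose_enabled(argv: list, env: dict = None) -> bool:
    """Sketch: treat VERBOSE=1 in the environment as equivalent to
    passing --verbose on the command line. Hypothetical helper, not
    the real llms.py code."""
    env = os.environ if env is None else env
    return "--verbose" in argv or env.get("VERBOSE") == "1"
```

In a `docker run`, this would let `-e VERBOSE=1` enable verbose logging without changing the container's entrypoint arguments.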
@@ -255,7 +276,10 @@ See [GITHUB_OAUTH_SETUP.md](GITHUB_OAUTH_SETUP.md) for detailed setup instructio
 
 ## Configuration
 
-The configuration file [llms.json](llms/llms.json) is saved to `~/.llms/llms.json` and defines available providers, models, and default settings. Key sections:
+The configuration file [llms.json](llms/llms.json) is saved to `~/.llms/llms.json` and defines available providers, models, and default settings. If it doesn't exist, `llms.json` is auto-created with the latest
+configuration, so you can re-create it by deleting your local config (e.g. `rm -rf ~/.llms`).
+
+Key sections:
 
 ### Defaults
 - `headers`: Common HTTP headers for all requests
@@ -268,7 +292,6 @@ The configuration file [llms.json](llms/llms.json) is saved to `~/.llms/llms.jso
 - `convert`: Max image size and length limits and auto conversion settings
 
 ### Providers
-
 Each provider configuration includes:
 - `enabled`: Whether the provider is active
 - `type`: Provider class (OpenAiProvider, GoogleProvider, etc.)
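The Server Mode feature described above exposes an OpenAI-compatible endpoint at `http://localhost:{PORT}/v1/chat/completions`. A minimal client sketch using only the standard library; the port and model name here are illustrative placeholders, not defaults shipped by the package:

```python
import json
import urllib.request

def build_request(prompt: str, model: str) -> dict:
    # Standard OpenAI chat-completion request shape, accepted by any
    # OpenAI-compatible server such as the one llms.py runs
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt: str, model: str = "gpt-4o-mini", port: int = 8000) -> str:
    # Assumed local server address; adjust port/model to your setup
    req = urllib.request.Request(
        f"http://localhost:{port}/v1/chat/completions",
        data=json.dumps(build_request(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses carry the text in choices[0].message.content
    return body["choices"][0]["message"]["content"]
```

Because the API is OpenAI-compatible, any existing OpenAI client library pointed at this base URL should also work unchanged.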
README.md:

@@ -14,6 +14,7 @@ Configure additional providers and models in [llms.json](llms/llms.json)
 - **Multi-Provider Support**: OpenRouter, Ollama, Anthropic, Google, OpenAI, Grok, Groq, Qwen, Z.ai, Mistral
 - **OpenAI-Compatible API**: Works with any client that supports OpenAI's chat completion API
 - **Built-in Analytics**: Built-in analytics UI to visualize costs, requests, and token usage
+- **GitHub OAuth**: Optionally secure your web UI and restrict access to specified GitHub Users
 - **Configuration Management**: Easy provider enable/disable and configuration management
 - **CLI Interface**: Simple command-line interface for quick interactions
 - **Server Mode**: Run an OpenAI-compatible HTTP server at `http://localhost:{PORT}/v1/chat/completions`
@@ -68,6 +69,26 @@ As they're a good indicator for the reliability and speed you can expect from di
 test the response times for all configured providers and models, the results of which will be frequently published to
 [/checks/latest.txt](https://github.com/ServiceStack/llms/blob/main/docs/checks/latest.txt)
 
+## Change Log
+
+#### v2.0.30 (2025-11-01)
+- Improved Responsive Layout with collapsible Sidebar
+- Watching config files for changes and auto-reloading
+- Add cancel button to cancel pending request
+- Return focus to textarea after request completes
+- Clicking outside model or system prompt selector will collapse it
+- Clicking on selected item no longer deselects it
+- Support `VERBOSE=1` for enabling `--verbose` mode (useful in Docker)
+
+#### v2.0.28 (2025-10-31)
+- Dark Mode
+- Drag n' Drop files in Message prompt
+- Copy & Paste files in Message prompt
+- Support for GitHub OAuth and optionally restrict access to specified Users
+- Support for Docker and Docker Compose
+
+[llms.py Releases](https://github.com/ServiceStack/llms/releases)
+
 ## Installation
 
 ### Using pip
@@ -215,7 +236,10 @@ See [GITHUB_OAUTH_SETUP.md](GITHUB_OAUTH_SETUP.md) for detailed setup instructio
 
 ## Configuration
 
-The configuration file [llms.json](llms/llms.json) is saved to `~/.llms/llms.json` and defines available providers, models, and default settings. Key sections:
+The configuration file [llms.json](llms/llms.json) is saved to `~/.llms/llms.json` and defines available providers, models, and default settings. If it doesn't exist, `llms.json` is auto-created with the latest
+configuration, so you can re-create it by deleting your local config (e.g. `rm -rf ~/.llms`).
+
+Key sections:
 
 ### Defaults
 - `headers`: Common HTTP headers for all requests
@@ -228,7 +252,6 @@ The configuration file [llms.json](llms/llms.json) is saved to `~/.llms/llms.jso
 - `convert`: Max image size and length limits and auto conversion settings
 
 ### Providers
-
 Each provider configuration includes:
 - `enabled`: Whether the provider is active
 - `type`: Provider class (OpenAiProvider, GoogleProvider, etc.)
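The Providers section above documents a per-provider `enabled` flag. A sketch of toggling it programmatically; note the top-level `providers` key and the sample entry are assumptions for illustration, as the real `~/.llms/llms.json` layout may differ:

```python
import json
from pathlib import Path

def set_provider_enabled(config: dict, name: str, enabled: bool) -> dict:
    """Flip the documented `enabled` flag on one provider entry.
    Assumes providers live under a top-level "providers" map."""
    config.setdefault("providers", {}).setdefault(name, {})["enabled"] = enabled
    return config

# Illustrative config shaped like the documented Providers section
config = {"providers": {"ollama": {"enabled": False, "type": "OpenAiProvider"}}}
set_provider_enabled(config, "ollama", True)

# To persist the change back to the documented config location:
# Path("~/.llms/llms.json").expanduser().write_text(json.dumps(config, indent=2))
```

Since v2.0.30 watches config files for changes and auto-reloads (per the changelog above), an edit like this would be picked up by a running server without a restart.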