kalavai-client 0.6.19.tar.gz → 0.6.20.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (25)
  1. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/PKG-INFO +32 -22
  2. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/README.md +31 -21
  3. kalavai_client-0.6.20/kalavai_client/__init__.py +2 -0
  4. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/kalavai_client/assets/apps.yaml +1 -1
  5. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/pyproject.toml +1 -1
  6. kalavai_client-0.6.19/kalavai_client/__init__.py +0 -2
  7. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/LICENSE +0 -0
  8. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/kalavai_client/__main__.py +0 -0
  9. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/kalavai_client/assets/__init__.py +0 -0
  10. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/kalavai_client/assets/apps_values.yaml +0 -0
  11. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/kalavai_client/assets/docker-compose-gui.yaml +0 -0
  12. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/kalavai_client/assets/docker-compose-template.yaml +0 -0
  13. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/kalavai_client/assets/nginx.conf +0 -0
  14. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/kalavai_client/assets/pool_config_template.yaml +0 -0
  15. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/kalavai_client/assets/pool_config_values.yaml +0 -0
  16. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/kalavai_client/assets/user_workspace.yaml +0 -0
  17. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/kalavai_client/assets/user_workspace_values.yaml +0 -0
  18. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/kalavai_client/auth.py +0 -0
  19. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/kalavai_client/bridge_api.py +0 -0
  20. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/kalavai_client/bridge_models.py +0 -0
  21. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/kalavai_client/cli.py +0 -0
  22. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/kalavai_client/cluster.py +0 -0
  23. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/kalavai_client/core.py +0 -0
  24. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/kalavai_client/env.py +0 -0
  25. {kalavai_client-0.6.19 → kalavai_client-0.6.20}/kalavai_client/utils.py +0 -0
--- kalavai_client-0.6.19/PKG-INFO
+++ kalavai_client-0.6.20/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.3
 Name: kalavai-client
-Version: 0.6.19
+Version: 0.6.20
 Summary: Client app for kalavai platform
 License: Apache-2.0
 Keywords: LLM,platform
@@ -50,7 +50,7 @@ Description-Content-Type: text/markdown
 ⭐⭐⭐ **Kalavai platform is open source, and free to use in both commercial and non-commercial purposes. If you find it useful, consider supporting us by [giving a star to our GitHub project](https://github.com/kalavai-net/kalavai-client), joining our [discord channel](https://discord.gg/YN6ThTJKbM) and follow our [Substack](https://kalavainet.substack.com/).**
 
 
-# Kalavai: turn your devices into a scalable AI platform
+# Kalavai: a platform to self-host AI on easy mode
 
 > AI in the cloud is not aligned with you, it's aligned with the company that owns it. Make sure you own your AI
 
@@ -61,15 +61,14 @@ Kalavai is an **open source** tool that turns **any devices** into a self-hosted
 
 ## What can Kalavai do?
 
-Kalavai's goal is to make using self-hosted AI (GenAI models and agents) in real applications accessible and affordable to all. It's a tool that transforms machines into a _magic box_ that **integrates all the components required to make AI useful in the age of massive computing**, from model deployment and orchestration to Agentic AI.
+Kalavai's goal is to make using self-hosted AI (GenAI models and agents) in real applications accessible and affordable to all.
 
 ### Core features
 
-- Manage **multiple devices resources as one**. One pool of RAM, CPUs and GPUs
-- **Deploy open source models seamlessly across devices**, wherever they are (cloud, on premises, personal devices)
-- Beyond LLMs: not just for large language models, but text-to-speech, speech-to-text, image understanding, coding generation and embedding models.
-- The hybrid dream: build on your laptop, move to the cloud (any!) with zero changes
-- Auto-discovery: all **models are automatically exposed** through a single OpenAI-like API and a ChatGPT-like UI playground
+- Manage **multiple devices resources as one**, wherever they come from (hybrid cloud, on prem, personal devices)
+- **Deploy open source models seamlessly across devices**, with zero-cost migration
+- Beyond LLMs: not just for large language models, but text-to-speech, speech-to-text, image generation, video understanding, coding generation and embedding models.
+- Production-ready: **models are automatically exposed** through a single OpenAI-like API and a ChatGPT-like UI playground, with off-the-shelf monitoring and evaluation framework.
 - Compatible with [most popular model engines](#support-for-llm-engines)
 - [Easy to expand](https://github.com/kalavai-net/kube-watcher/tree/main/templates) to custom workloads
 
@@ -103,17 +102,24 @@ Kalavai's goal is to make using self-hosted AI (GenAI models and agents) in real
 
 </details>
 
-### Support for LLM engines
+### Support for AI engines
 
-We currently support out of the box the following LLM engines:
+We currently support out of the box the following AI engines:
 
-- [vLLM](https://docs.vllm.ai/en/latest/)
-- [llama.cpp](https://github.com/ggerganov/llama.cpp)
-- [Aphrodite Engine](https://github.com/aphrodite-engine/aphrodite-engine)
-- [Petals](https://github.com/bigscience-workshop/petals)
+- [vLLM](https://docs.vllm.ai/en/latest/): most popular GPU-based model inference.
+- [llama.cpp](https://github.com/ggerganov/llama.cpp): CPU-based GGUF model inference.
+- [SGLang](https://github.com/sgl-project/sglang): Super fast GPU-based model inference.
+- [n8n](https://n8n.io/): no-code workload automation framework.
+- [Flowise](https://flowiseai.com/): no-code agentic AI workload framework.
+- [Speaches](https://speaches.ai/): audio (speech-to-text and text-to-speech) model inference.
+- [Langfuse](https://langfuse.com/): open source evaluation and monitoring GenAI framework.
+- [OpenWebUI](https://docs.openwebui.com/): ChatGPT-like UI playground to interface with any models.
 
 Coming soon:
 
+- [diffusers](https://huggingface.co/docs/diffusers/en/index)
+- [Aphrodite Engine](https://github.com/aphrodite-engine/aphrodite-engine)
+- [Petals](https://github.com/bigscience-workshop/petals)
 - [exo](https://github.com/exo-explore/exo)
 - [GPUstack](https://docs.gpustack.ai/0.4/overview/)
 - [RayServe](https://docs.ray.io/en/latest/serve/index.html)
@@ -141,12 +147,16 @@ The `kalavai-client` is the main tool to interact with the Kalavai platform, to
 
 <summary>Requirements</summary>
 
+For seed nodes:
+- A 64 bits x86 based Linux machine (laptop, desktop or VM)
+- [Docker engine installed](https://docs.docker.com/engine/install/ubuntu/) with [privilege access](https://docs.docker.com/engine/containers/run/#runtime-privilege-and-linux-capabilities).
+
 For workers sharing resources with the pool:
 
-- A laptop, desktop or Virtual Machine
+- A laptop, desktop or Virtual Machine (MacOS, Linux or Windows; ARM or x86)
+- If self-hosting, workers should be on the same network as the seed node. Looking for over-the-internet connectivity? Check out our [managed seeds](https://platform.kalavai.net)
 - Docker engine installed (for [linux](https://docs.docker.com/engine/install/ubuntu/), [Windows and MacOS](https://docs.docker.com/desktop/)) with [privilege access](https://docs.docker.com/engine/containers/run/#runtime-privilege-and-linux-capabilities).
 
-> **Support for Windows and MacOS workers is experimental**: kalavai workers run on docker containers that require access to the host network interfaces, thus systems that do not support containers natively (Windows and MacOS) may have difficulties finding each other.
 
 </details>
 
@@ -162,24 +172,24 @@ pip install kalavai-client
 
 ## Create a a local, private AI pool
 
-> Kalavai is **free to use, no caps, for both commercial and non-commercial purposes**. All you need to get started is one or more computers that can see each other (i.e. within the same network), and you are good to go. If you are interested in join computers in different locations / networks, [contact us](mailto:info@kalavai.net) or [book a demo](https://app.onecal.io/b/kalavai/book-a-demo) with the founders.
-
 You can create and manage your pools with the new kalavai GUI, which can be started with:
 
 ```bash
 kalavai gui start
 ```
 
-This will expose the GUI and the backend services in localhost. By default, the GUI is accessible via [http://localhost:3000](http://localhost:3000). In the UI users can create and join LLM pools, monitor devices, deploy LLMs and more.
+This will expose the GUI and the backend services in localhost. By default, the GUI is accessible via [http://localhost:49153](http://localhost:49153). In the UI users can create and join AI pools, monitor devices, deploy LLMs and more.
 
 ![Kalavai logo](docs/docs/assets/images/ui_dashboard_multiple.png)
 
-Check out our [getting started guide](https://kalavai-net.github.io/kalavai-client/getting_started/) for next steps.
+Check out our [getting started guide](https://kalavai-net.github.io/kalavai-client/getting_started/) for next steps on how to add more workers to your pool, or use our [managed seeds service](https://kalavai-net.github.io/kalavai-client/getting_started/#1b-managed-pools-create-a-seed) for over-the-internet AI pools.
 
 
 ## Enough already, let's run stuff!
 
-Check our [examples](examples/) to put your new AI pool to good use! For an end to end tour, check our [self-hosted](https://kalavai-net.github.io/kalavai-client/self_hosted_llm_pool/) and [public LLM pools](https://kalavai-net.github.io/kalavai-client/public_llm_pool/) guides.
+For an end to end tour on building your own OpenAI-like service, check our [self-hosted](https://kalavai-net.github.io/kalavai-client/self_hosted_llm_pool/) guide.
+
+Check our [examples](examples/) to put your new AI pool to good use!
 
 
 ## Compatibility matrix
@@ -247,7 +257,7 @@ Anything missing here? Give us a shout in the [discussion board](https://github.
 
 <summary>Expand</summary>
 
-Python version >= 3.6.
+Python version >= 3.10.
 
 ```bash
 sudo add-apt-repository ppa:deadsnakes/ppa
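The README content in the diff above advertises that deployed models are exposed behind a single OpenAI-like API. As a rough sketch of what a client call could look like (the base URL, port and model name below are illustrative assumptions, not values taken from this diff):

```python
import json
import urllib.request

def build_chat_payload(prompt: str, model: str) -> dict:
    # Standard OpenAI-style chat completion request body
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt: str,
         base_url: str = "http://localhost:8000/v1",   # placeholder endpoint
         model: str = "my-deployed-model") -> str:      # placeholder model name
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # OpenAI-compatible servers return choices[0].message.content
        return json.load(resp)["choices"][0]["message"]["content"]
```

Substitute whatever host and model identifier your own pool actually exposes.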
--- kalavai_client-0.6.19/README.md
+++ kalavai_client-0.6.20/README.md
@@ -10,7 +10,7 @@
 ⭐⭐⭐ **Kalavai platform is open source, and free to use in both commercial and non-commercial purposes. If you find it useful, consider supporting us by [giving a star to our GitHub project](https://github.com/kalavai-net/kalavai-client), joining our [discord channel](https://discord.gg/YN6ThTJKbM) and follow our [Substack](https://kalavainet.substack.com/).**
 
 
-# Kalavai: turn your devices into a scalable AI platform
+# Kalavai: a platform to self-host AI on easy mode
 
 > AI in the cloud is not aligned with you, it's aligned with the company that owns it. Make sure you own your AI
 
@@ -21,15 +21,14 @@ Kalavai is an **open source** tool that turns **any devices** into a self-hosted
 
 ## What can Kalavai do?
 
-Kalavai's goal is to make using self-hosted AI (GenAI models and agents) in real applications accessible and affordable to all. It's a tool that transforms machines into a _magic box_ that **integrates all the components required to make AI useful in the age of massive computing**, from model deployment and orchestration to Agentic AI.
+Kalavai's goal is to make using self-hosted AI (GenAI models and agents) in real applications accessible and affordable to all.
 
 ### Core features
 
-- Manage **multiple devices resources as one**. One pool of RAM, CPUs and GPUs
-- **Deploy open source models seamlessly across devices**, wherever they are (cloud, on premises, personal devices)
-- Beyond LLMs: not just for large language models, but text-to-speech, speech-to-text, image understanding, coding generation and embedding models.
-- The hybrid dream: build on your laptop, move to the cloud (any!) with zero changes
-- Auto-discovery: all **models are automatically exposed** through a single OpenAI-like API and a ChatGPT-like UI playground
+- Manage **multiple devices resources as one**, wherever they come from (hybrid cloud, on prem, personal devices)
+- **Deploy open source models seamlessly across devices**, with zero-cost migration
+- Beyond LLMs: not just for large language models, but text-to-speech, speech-to-text, image generation, video understanding, coding generation and embedding models.
+- Production-ready: **models are automatically exposed** through a single OpenAI-like API and a ChatGPT-like UI playground, with off-the-shelf monitoring and evaluation framework.
 - Compatible with [most popular model engines](#support-for-llm-engines)
 - [Easy to expand](https://github.com/kalavai-net/kube-watcher/tree/main/templates) to custom workloads
 
@@ -63,17 +62,24 @@ Kalavai's goal is to make using self-hosted AI (GenAI models and agents) in real
 
 </details>
 
-### Support for LLM engines
+### Support for AI engines
 
-We currently support out of the box the following LLM engines:
+We currently support out of the box the following AI engines:
 
-- [vLLM](https://docs.vllm.ai/en/latest/)
-- [llama.cpp](https://github.com/ggerganov/llama.cpp)
-- [Aphrodite Engine](https://github.com/aphrodite-engine/aphrodite-engine)
-- [Petals](https://github.com/bigscience-workshop/petals)
+- [vLLM](https://docs.vllm.ai/en/latest/): most popular GPU-based model inference.
+- [llama.cpp](https://github.com/ggerganov/llama.cpp): CPU-based GGUF model inference.
+- [SGLang](https://github.com/sgl-project/sglang): Super fast GPU-based model inference.
+- [n8n](https://n8n.io/): no-code workload automation framework.
+- [Flowise](https://flowiseai.com/): no-code agentic AI workload framework.
+- [Speaches](https://speaches.ai/): audio (speech-to-text and text-to-speech) model inference.
+- [Langfuse](https://langfuse.com/): open source evaluation and monitoring GenAI framework.
+- [OpenWebUI](https://docs.openwebui.com/): ChatGPT-like UI playground to interface with any models.
 
 Coming soon:
 
+- [diffusers](https://huggingface.co/docs/diffusers/en/index)
+- [Aphrodite Engine](https://github.com/aphrodite-engine/aphrodite-engine)
+- [Petals](https://github.com/bigscience-workshop/petals)
 - [exo](https://github.com/exo-explore/exo)
 - [GPUstack](https://docs.gpustack.ai/0.4/overview/)
 - [RayServe](https://docs.ray.io/en/latest/serve/index.html)
@@ -101,12 +107,16 @@ The `kalavai-client` is the main tool to interact with the Kalavai platform, to
 
 <summary>Requirements</summary>
 
+For seed nodes:
+- A 64 bits x86 based Linux machine (laptop, desktop or VM)
+- [Docker engine installed](https://docs.docker.com/engine/install/ubuntu/) with [privilege access](https://docs.docker.com/engine/containers/run/#runtime-privilege-and-linux-capabilities).
+
 For workers sharing resources with the pool:
 
-- A laptop, desktop or Virtual Machine
+- A laptop, desktop or Virtual Machine (MacOS, Linux or Windows; ARM or x86)
+- If self-hosting, workers should be on the same network as the seed node. Looking for over-the-internet connectivity? Check out our [managed seeds](https://platform.kalavai.net)
 - Docker engine installed (for [linux](https://docs.docker.com/engine/install/ubuntu/), [Windows and MacOS](https://docs.docker.com/desktop/)) with [privilege access](https://docs.docker.com/engine/containers/run/#runtime-privilege-and-linux-capabilities).
 
-> **Support for Windows and MacOS workers is experimental**: kalavai workers run on docker containers that require access to the host network interfaces, thus systems that do not support containers natively (Windows and MacOS) may have difficulties finding each other.
 
 </details>
 
@@ -122,24 +132,24 @@ pip install kalavai-client
 
 ## Create a a local, private AI pool
 
-> Kalavai is **free to use, no caps, for both commercial and non-commercial purposes**. All you need to get started is one or more computers that can see each other (i.e. within the same network), and you are good to go. If you are interested in join computers in different locations / networks, [contact us](mailto:info@kalavai.net) or [book a demo](https://app.onecal.io/b/kalavai/book-a-demo) with the founders.
-
 You can create and manage your pools with the new kalavai GUI, which can be started with:
 
 ```bash
 kalavai gui start
 ```
 
-This will expose the GUI and the backend services in localhost. By default, the GUI is accessible via [http://localhost:3000](http://localhost:3000). In the UI users can create and join LLM pools, monitor devices, deploy LLMs and more.
+This will expose the GUI and the backend services in localhost. By default, the GUI is accessible via [http://localhost:49153](http://localhost:49153). In the UI users can create and join AI pools, monitor devices, deploy LLMs and more.
 
 ![Kalavai logo](docs/docs/assets/images/ui_dashboard_multiple.png)
 
-Check out our [getting started guide](https://kalavai-net.github.io/kalavai-client/getting_started/) for next steps.
+Check out our [getting started guide](https://kalavai-net.github.io/kalavai-client/getting_started/) for next steps on how to add more workers to your pool, or use our [managed seeds service](https://kalavai-net.github.io/kalavai-client/getting_started/#1b-managed-pools-create-a-seed) for over-the-internet AI pools.
 
 
 ## Enough already, let's run stuff!
 
-Check our [examples](examples/) to put your new AI pool to good use! For an end to end tour, check our [self-hosted](https://kalavai-net.github.io/kalavai-client/self_hosted_llm_pool/) and [public LLM pools](https://kalavai-net.github.io/kalavai-client/public_llm_pool/) guides.
+For an end to end tour on building your own OpenAI-like service, check our [self-hosted](https://kalavai-net.github.io/kalavai-client/self_hosted_llm_pool/) guide.
+
+Check our [examples](examples/) to put your new AI pool to good use!
 
 
 ## Compatibility matrix
@@ -207,7 +217,7 @@ Anything missing here? Give us a shout in the [discussion board](https://github.
 
 <summary>Expand</summary>
 
-Python version >= 3.6.
+Python version >= 3.10.
 
 ```bash
 sudo add-apt-repository ppa:deadsnakes/ppa
--- /dev/null
+++ kalavai_client-0.6.20/kalavai_client/__init__.py
@@ -0,0 +1,2 @@
+
+__version__ = "0.6.20"
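The new `kalavai_client/__init__.py` above now carries a `__version__` attribute, and the distribution metadata carries the same version. Assuming a standard pip install, the version can be read from package metadata; a small sketch with a fallback for environments where the package is absent:

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(pkg: str = "kalavai-client") -> str:
    # Query the installed distribution's metadata; report absence
    # instead of raising, so the check is safe to run anywhere.
    try:
        return version(pkg)
    except PackageNotFoundError:
        return "not installed"

print(installed_version())
```

On a machine with this release installed, this would report `0.6.20`.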
--- kalavai_client-0.6.19/kalavai_client/assets/apps.yaml
+++ kalavai_client-0.6.20/kalavai_client/assets/apps.yaml
@@ -154,7 +154,7 @@ releases:
 - name: replicas
   value: 1
 - name: image_tag
-  value: "v2025.07.31"
+  value: "v2025.07.33"
 - name: deployment.in_cluster
   value: "True"
 - name: deployment.kalavai_username_key
--- kalavai_client-0.6.19/pyproject.toml
+++ kalavai_client-0.6.20/pyproject.toml
@@ -1,6 +1,6 @@
 [project]
 name = "kalavai-client"
-version = "0.6.19"
+version = "0.6.20"
 authors = [
     {name = "Carlos Fernandez Musoles", email = "carlos@kalavai.net"}
 ]
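This release bumps the version string in two places, `pyproject.toml` and the new `kalavai_client/__init__.py`. A quick consistency check one might run before tagging a release; the inline snippets below are stand-ins mirroring the two declarations in this diff, not code that reads the real files:

```python
import re

# Stand-in snippets mirroring the two version declarations in this release
pyproject_toml = 'version = "0.6.20"'
init_py = '__version__ = "0.6.20"'

def extract_version(text: str) -> str:
    # Pull the first quoted MAJOR.MINOR.PATCH string out of the text
    match = re.search(r'"(\d+\.\d+\.\d+)"', text)
    if match is None:
        raise ValueError("no version string found")
    return match.group(1)

# Both declarations must agree before a release is tagged
assert extract_version(pyproject_toml) == extract_version(init_py) == "0.6.20"
```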
--- kalavai_client-0.6.19/kalavai_client/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-
-__version__ = "0.6.19"
The remaining files listed above (7-25) have no content changes between versions.