kalavai-client 0.5.8__tar.gz → 0.5.10__tar.gz

Files changed (20)
  1. {kalavai_client-0.5.8 → kalavai_client-0.5.10}/PKG-INFO +26 -10
  2. {kalavai_client-0.5.8 → kalavai_client-0.5.10}/README.md +22 -6
  3. kalavai_client-0.5.10/kalavai_client/__init__.py +2 -0
  4. {kalavai_client-0.5.8 → kalavai_client-0.5.10}/kalavai_client/assets/apps.yaml +8 -8
  5. {kalavai_client-0.5.8 → kalavai_client-0.5.10}/kalavai_client/assets/docker-compose-template.yaml +25 -16
  6. {kalavai_client-0.5.8 → kalavai_client-0.5.10}/kalavai_client/cli.py +53 -37
  7. {kalavai_client-0.5.8 → kalavai_client-0.5.10}/kalavai_client/utils.py +1 -1
  8. {kalavai_client-0.5.8 → kalavai_client-0.5.10}/pyproject.toml +2 -2
  9. kalavai_client-0.5.8/kalavai_client/__init__.py +0 -2
  10. {kalavai_client-0.5.8 → kalavai_client-0.5.10}/LICENSE +0 -0
  11. {kalavai_client-0.5.8 → kalavai_client-0.5.10}/kalavai_client/__main__.py +0 -0
  12. {kalavai_client-0.5.8 → kalavai_client-0.5.10}/kalavai_client/assets/__init__.py +0 -0
  13. {kalavai_client-0.5.8 → kalavai_client-0.5.10}/kalavai_client/assets/apps_values.yaml +0 -0
  14. {kalavai_client-0.5.8 → kalavai_client-0.5.10}/kalavai_client/assets/nginx.conf +0 -0
  15. {kalavai_client-0.5.8 → kalavai_client-0.5.10}/kalavai_client/assets/pool_config_template.yaml +0 -0
  16. {kalavai_client-0.5.8 → kalavai_client-0.5.10}/kalavai_client/assets/pool_config_values.yaml +0 -0
  17. {kalavai_client-0.5.8 → kalavai_client-0.5.10}/kalavai_client/assets/user_workspace.yaml +0 -0
  18. {kalavai_client-0.5.8 → kalavai_client-0.5.10}/kalavai_client/assets/user_workspace_values.yaml +0 -0
  19. {kalavai_client-0.5.8 → kalavai_client-0.5.10}/kalavai_client/auth.py +0 -0
  20. {kalavai_client-0.5.8 → kalavai_client-0.5.10}/kalavai_client/cluster.py +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.3
  Name: kalavai-client
- Version: 0.5.8
+ Version: 0.5.10
  Summary: Client app for kalavai platform
  License: Apache-2.0
  Keywords: LLM,platform
@@ -8,10 +8,8 @@ Author: Carlos Fernandez Musoles
  Author-email: carlos@kalavai.net
  Maintainer: Carlos Fernandez Musoles
  Maintainer-email: carlos@kalavai.net
- Requires-Python: <3.12
+ Requires-Python: >=3.4
  Classifier: License :: OSI Approved :: Apache Software License
- Classifier: Programming Language :: Python :: 2
- Classifier: Programming Language :: Python :: 2.7
  Classifier: Programming Language :: Python :: 3
  Classifier: Programming Language :: Python :: 3.4
  Classifier: Programming Language :: Python :: 3.5
@@ -21,6 +19,8 @@ Classifier: Programming Language :: Python :: 3.8
  Classifier: Programming Language :: Python :: 3.9
  Classifier: Programming Language :: Python :: 3.10
  Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
  Provides-Extra: dev
  Requires-Dist: Pillow (==10.3.0)
  Requires-Dist: anvil-uplink (==0.5.1)
@@ -89,6 +89,7 @@ https://github.com/user-attachments/assets/0d2316f3-79ea-46ac-b41e-8ef720f52672

  ### News updates

+ - 31 January 2025: `kalavai-client` is now a [PyPI package](https://pypi.org/project/kalavai-client/), easier to install than ever!
  - 27 January 2025: Support for accessing pools from remote computers
  - 9 January 2025: Added support for [Aphrodite Engine](https://github.com/aphrodite-engine/aphrodite-engine) models
  - 8 January 2025: Release of [a free, public, shared pool](/docs/docs/public_llm_pool.md) for community LLM deployment
@@ -129,20 +130,24 @@ Not what you were looking for? [Tell us](https://github.com/kalavai-net/kalavai-

  ## Getting started

- The `kalavai` client is the main tool to interact with the Kalavai platform, to create and manage both local and public pools and also to interact with them (e.g. deploy models). Let's go over its installation.
+ The `kalavai-client` is the main tool to interact with the Kalavai platform, to create and manage both local and public pools and also to interact with them (e.g. deploy models). Let's go over its installation.

- From release **v0.5.0, you can now install `kalavai` client in non-worker computers**. You can run a pool on a set of machines and have the client on a remote computer from which you access the LLM pool. Because the client only requires having python installed, this means more computers are now supported to run it.
+ From release **v0.5.0, you can now install `kalavai-client` in non-worker computers**. You can run a pool on a set of machines and have the client on a remote computer from which you access the LLM pool. Because the client only requires having python installed, this means more computers are now supported to run it.


- ### Requirements for a worker machine
+ ### Requirements
+
+ For workers sharing resources with the pool:

  - A laptop, desktop or Virtual Machine
  - Docker engine installed (for [linux](https://docs.docker.com/engine/install/), [Windows and MacOS](https://docs.docker.com/desktop/)) with [privilege access](https://docs.docker.com/engine/containers/run/#runtime-privilege-and-linux-capabilities).

+ > **Support for Windows and MacOS workers is experimental**: kalavai workers run on docker containers that require access to the host network interfaces, thus systems that do not support containers natively (Windows and MacOS) may have difficulties finding each other.
+
+ Any system that runs python 3.6+ is able to run the `kalavai-client` and therefore connect and operate an LLM pool, [without sharing with the pool](). Your computer won't be adding its capacity to the pool, but it wil be able to deploy jobs and interact with models.

- ### Requirements to run the client

- - Python 3.10+
+ #### Common issues

  If you see the following error:

@@ -211,6 +216,17 @@ Copy the joining token. On the worker node, run:
  kalavai pool join <token>
  ```

+ ### 3. Attach more clients
+
+ You can now connect to an existing pool from any computer -not just from worker nodes. To connect to a pool, run:
+
+ ```bash
+ kalavai pool attach <token>
+ ```
+
+ This won't add the machine as a worker, but you will be able to operate in the pool as if you were. This is ideal for remote access to the pool, and to use the pool from machines that cannot run workers (docker container limitations).
+
+
  ### Enough already, let's run stuff!

  Check our [examples](examples/) to put your new AI pool to good use!
@@ -275,7 +291,7 @@ Anything missing here? Give us a shout in the [discussion board](https://github.

  ### Requirements

- Python version <= 3.12.
+ Python version >= 3.6.

  ```bash
  sudo add-apt-repository ppa:deadsnakes/ppa
@@ -46,6 +46,7 @@ https://github.com/user-attachments/assets/0d2316f3-79ea-46ac-b41e-8ef720f52672

  ### News updates

+ - 31 January 2025: `kalavai-client` is now a [PyPI package](https://pypi.org/project/kalavai-client/), easier to install than ever!
  - 27 January 2025: Support for accessing pools from remote computers
  - 9 January 2025: Added support for [Aphrodite Engine](https://github.com/aphrodite-engine/aphrodite-engine) models
  - 8 January 2025: Release of [a free, public, shared pool](/docs/docs/public_llm_pool.md) for community LLM deployment
@@ -86,20 +87,24 @@ Not what you were looking for? [Tell us](https://github.com/kalavai-net/kalavai-

  ## Getting started

- The `kalavai` client is the main tool to interact with the Kalavai platform, to create and manage both local and public pools and also to interact with them (e.g. deploy models). Let's go over its installation.
+ The `kalavai-client` is the main tool to interact with the Kalavai platform, to create and manage both local and public pools and also to interact with them (e.g. deploy models). Let's go over its installation.

- From release **v0.5.0, you can now install `kalavai` client in non-worker computers**. You can run a pool on a set of machines and have the client on a remote computer from which you access the LLM pool. Because the client only requires having python installed, this means more computers are now supported to run it.
+ From release **v0.5.0, you can now install `kalavai-client` in non-worker computers**. You can run a pool on a set of machines and have the client on a remote computer from which you access the LLM pool. Because the client only requires having python installed, this means more computers are now supported to run it.


- ### Requirements for a worker machine
+ ### Requirements
+
+ For workers sharing resources with the pool:

  - A laptop, desktop or Virtual Machine
  - Docker engine installed (for [linux](https://docs.docker.com/engine/install/), [Windows and MacOS](https://docs.docker.com/desktop/)) with [privilege access](https://docs.docker.com/engine/containers/run/#runtime-privilege-and-linux-capabilities).

+ > **Support for Windows and MacOS workers is experimental**: kalavai workers run on docker containers that require access to the host network interfaces, thus systems that do not support containers natively (Windows and MacOS) may have difficulties finding each other.
+
+ Any system that runs python 3.6+ is able to run the `kalavai-client` and therefore connect and operate an LLM pool, [without sharing with the pool](). Your computer won't be adding its capacity to the pool, but it wil be able to deploy jobs and interact with models.

- ### Requirements to run the client

- - Python 3.10+
+ #### Common issues

  If you see the following error:

@@ -168,6 +173,17 @@ Copy the joining token. On the worker node, run:
  kalavai pool join <token>
  ```

+ ### 3. Attach more clients
+
+ You can now connect to an existing pool from any computer -not just from worker nodes. To connect to a pool, run:
+
+ ```bash
+ kalavai pool attach <token>
+ ```
+
+ This won't add the machine as a worker, but you will be able to operate in the pool as if you were. This is ideal for remote access to the pool, and to use the pool from machines that cannot run workers (docker container limitations).
+
+
  ### Enough already, let's run stuff!

  Check our [examples](examples/) to put your new AI pool to good use!
@@ -232,7 +248,7 @@ Anything missing here? Give us a shout in the [discussion board](https://github.

  ### Requirements

- Python version <= 3.12.
+ Python version >= 3.6.

  ```bash
  sudo add-apt-repository ppa:deadsnakes/ppa
@@ -0,0 +1,2 @@
+
+ __version__ = "0.5.10"
@@ -189,13 +189,13 @@ releases:
  value: "1"
  - name: devicePlugin.deviceSplitCount
  value: "1"
- - name: scheduler.customWebhook.port
- value: "30498"
- - name: scheduler.service.schedulerPort
- value: "30498"
- - name: scheduler.service.monitorPort
- value: "30493"
- - name: devicePlugin.service.httpPort
- value: "30492"
+ # - name: scheduler.customWebhook.port
+ # value: "30498"
+ # - name: scheduler.service.schedulerPort
+ # value: "30498"
+ # - name: scheduler.service.monitorPort
+ # value: "30493"
+ # - name: devicePlugin.service.httpPort
+ # value: "30492"


@@ -14,14 +14,13 @@ services:
  # - "6443:6443" # kube server
  # - "10250:10250" # worker balancer
  # - "8472:8472/udp" # flannel vxlan
- # - "51820:51820/udp" # flannel wireguard
+ # - "51820-51830:51820-51830" # flannel wireguard
  # {% if command == "server" %}
  # - "30000-30500:30000-30500"
  # {% endif %}
  environment:
  - HOST_NAME={{node_name}}
  - IFACE_NAME={{flannel_iface}}
- - PORT=51820
  - TOKEN={{vpn_token}}
  volumes:
  - /dev/net/tun:/dev/net/tun
@@ -36,6 +35,9 @@ services:
  # volumes:
  # - {{nginx_path}}/nginx.conf:/etc/nginx/nginx.conf
  {% endif %}
+
+ # run worker only if command is set
+ {%if command %}
  {{service_name}}:
  image: docker.io/bundenth/kalavai-runner:gpu-latest
  container_name: {{service_name}}
@@ -44,24 +46,29 @@ services:
  - {{vpn_name}}
  network_mode: "service:{{vpn_name}}"
  {% else %}
- hostname: {{node_name}}
- networks:
- - custom-network
- ports:
- - "6443:6443" # kube server
- - "10250:10250" # worker balancer
- - "8472:8472" # flannel vxlan
- - "51820:51820" # flannel wireguard
- {% if command == "server" %}
- - "30000-30500:30000-30500"
- {% endif %}
+ network_mode: host
+ # hostname: {{node_name}}
+ # networks:
+ # - custom-network
+ # ports:
+ # - "6443:6443" # kube server
+ # - "2379-2380:2379-2380" # etcd server
+ # - "10259:10259" # kube scheduler
+ # - "10257:10257" # kube controller manager
+ # - "10250:10250" # worker balancer
+ # - "8285:8285" # flannel
+ # - "8472:8472" # flannel vxlan
+ # - "51820:51820" # flannel wireguard
+ # {% if command == "server" %}
+ # - "30000-32767:30000-32767"
+ # {% endif %}
  {% endif %}
  privileged: true
  restart: unless-stopped
  command: >
  --command={{command}}
  {% if command == "server" %}
- --port_range="30000-30500"
+ --port_range="30000-32767"
  {% else %}
  --server_ip={{pool_ip}}
  --token={{pool_token}}
@@ -82,7 +89,7 @@ services:
  - {{k3s_path}}:/var/lib/rancher/k3s # Persist data
  - {{etc_path}}:/etc/rancher/k3s # Config files

- {% if num_gpus and num_gpus > 0 %}
+ {% if num_gpus and num_gpus > 0 %}
  deploy:
  resources:
  reservations:
@@ -90,8 +97,10 @@ services:
  - driver: nvidia
  count: {{num_gpus}}
  capabilities: [gpu]
+ {% endif %}
  {% endif %}

  networks:
  custom-network:
- driver: bridge
+ driver: bridge
+
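
The worker/server service is now guarded by `{% if command %}`, so the same template can be rendered without a runner container for attach-only machines. A minimal sketch, assuming Jinja2 is what renders the template; the template string below is a trimmed, hypothetical stand-in for `docker-compose-template.yaml`, kept only to show the guard's behaviour:

```python
# Minimal sketch (assumes Jinja2). TEMPLATE is a hypothetical, trimmed stand-in
# for docker-compose-template.yaml, reduced to the {% if command %} guard.
from jinja2 import Template

TEMPLATE = """services:
{% if command %}
  {{service_name}}:
    image: docker.io/bundenth/kalavai-runner:gpu-latest
    command: >
      --command={{command}}
{% endif %}
"""

# A worker/server render includes the runner service...
print(Template(TEMPLATE).render(command="agent", service_name="kalavai-worker"))
# ...while an attach-style render (empty command) omits it entirely.
print(Template(TEMPLATE).render(command="", service_name="kalavai-worker"))
```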
@@ -118,27 +118,6 @@ CLUSTER = dockerCluster(
  ######################
  ## HELPER FUNCTIONS ##
  ######################
-
- def check_vpn_compatibility():
-     """Check required packages to join VPN"""
-     logs = []
-     console.log("[white]Checking system requirements...")
-     # netclient
-     try:
-         run_cmd("sudo netclient version >/dev/null 2>&1")
-     except:
-         logs.append("[red]Netmaker not installed. Install instructions:\n")
-         logs.append(" Linux: https://docs.netmaker.io/docs/netclient#linux\n")
-         logs.append(" Windows: https://docs.netmaker.io/docs/netclient#windows\n")
-         logs.append(" MacOS: https://docs.netmaker.io/docs/netclient#mac\n")
-
-     if len(logs) == 0:
-         console.log("[green]System is ready to join the vpn")
-         return True
-     else:
-         for log in logs:
-             console.log(log)
-         return False

  def check_seed_compatibility():
      """Check required packages to start pools"""
@@ -358,7 +337,7 @@ def select_token_type():
          break
      return {"admin": choice == 0, "user": choice == 1, "worker": choice == 2}

- def generate_compose_config(role, node_name, node_labels, is_public, pool_ip=None, vpn_token=None, pool_token=None):
+ def generate_compose_config(role, node_name, is_public, node_labels=None, pool_ip=None, vpn_token=None, pool_token=None):
      num_gpus = 0
      try:
          has_gpus = check_gpu_drivers()
@@ -370,6 +349,8 @@ def generate_compose_config(role, node_name, node_labels, is_public, pool_ip=Non
          )
      except:
          console.log(f"[red]WARNING: error when fetching NVIDIA GPU info. GPUs will not be used on this local machine")
+     if node_labels is not None:
+         node_labels = " ".join([f"--node-label {key}={value}" for key, value in node_labels.items()])
      compose_values = {
          "user_path": user_path(""),
          "service_name": DEFAULT_CONTAINER_NAME,
@@ -384,7 +365,7 @@ def generate_compose_config(role, node_name, node_labels, is_public, pool_ip=Non
          "num_gpus": num_gpus,
          "k3s_path": f"{CONTAINER_HOST_PATH}/k3s",
          "etc_path": f"{CONTAINER_HOST_PATH}/etc",
-         "node_labels": " ".join([f"--node-label {key}={value}" for key, value in node_labels.items()]),
+         "node_labels": node_labels,
          "flannel_iface": DEFAULT_FLANNEL_IFACE if is_public else ""
      }
      # generate local config files
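
Making `node_labels` optional is what lets the new attach path call `generate_compose_config` without any labels. A standalone sketch of the dict-to-flags conversion the diff moves inside the function (the helper name is hypothetical, and it returns an empty string rather than `None` for brevity):

```python
# Standalone sketch of the optional label handling above. node_label_flags is a
# hypothetical helper; the flag format mirrors the diff ("--node-label key=value"
# pairs joined by spaces), and no labels yields no flags.
from typing import Optional


def node_label_flags(node_labels: Optional[dict] = None) -> str:
    if node_labels is None:
        return ""
    return " ".join(f"--node-label {key}={value}" for key, value in node_labels.items())


print(node_label_flags({"gpu": "true", "zone": "eu-west"}))
# --node-label gpu=true --node-label zone=eu-west
print(repr(node_label_flags()))  # '' -> attach-style call with no labels
```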
@@ -1121,6 +1102,7 @@ def pool__attach(token, *others, node_name=None):
      """
      Set creds in token on the local instance
      """
+     # check that is not attached to another instance
      if os.path.exists(USER_LOCAL_SERVER_FILE):
          option = user_confirm(
              question="You seem to be connected to an instance already. Are you sure you want to join a new one?",
@@ -1129,34 +1111,39 @@ def pool__attach(token, *others, node_name=None):
          if option == 0:
              console.log("[green]Nothing happened.")
              return
+
+     # check token
+     if not pool__check_token(token):
+         return
+
      try:
          data = decode_dict(token)
          kalavai_seed_ip = data[CLUSTER_IP_KEY]
-         kalavai_token = data[CLUSTER_TOKEN_KEY]
          cluster_name = data[CLUSTER_NAME_KEY]
          auth_key = data[AUTH_KEY]
          watcher_service = data[WATCHER_SERVICE_KEY]
          public_location = data[PUBLIC_LOCATION_KEY]
-     except:
-         console.log("[red]Error when parsing token. Invalid token")
+         vpn = defaultdict(lambda: None)
+     except Exception as e:
+         console.log(str(e))
+         console.log("[red] Invalid token")
          return
-
+
      user = defaultdict(lambda: None)
      if public_location is not None:
-         console.log("Joining private network")
+         user = user_login(user_cookie=USER_COOKIE)
+         if user is None:
+             console.log("[red]Must be logged in to join public pools. Run [yellow]kalavai login[red] to authenticate")
+             exit()
+         console.log("Fetching VPN credentials")
          try:
-             if not check_vpn_compatibility():
-                 return
-             vpn = join_vpn(
+             vpn = get_vpn_details(
                  location=public_location,
                  user_cookie=USER_COOKIE)
-             user = user_login(user_cookie=USER_COOKIE)
-             time.sleep(5)
          except Exception as e:
              console.log(f"[red]Error when joining network: {str(e)}")
              console.log("Are you authenticated? Try [yellow]kalavai login")
              return
-         # validate public seed
          try:
              validate_join_public_seed(
                  cluster_name=cluster_name,
@@ -1165,9 +1152,19 @@ def pool__attach(token, *others, node_name=None):
              )
          except Exception as e:
              console.log(f"[red]Error when joining network: {str(e)}")
-             leave_vpn(container_name=DEFAULT_VPN_CONTAINER_NAME)
              return
-
+
+     # local agent join
+     # 1. Generate local cache files
+     console.log("Generating config files...")
+
+     # Generate docker compose recipe
+     generate_compose_config(
+         role="",
+         vpn_token=vpn["key"],
+         node_name=node_name,
+         is_public=public_location is not None)
+
      store_server_info(
          server_ip=kalavai_seed_ip,
          auth_key=auth_key,
@@ -1178,7 +1175,26 @@ def pool__attach(token, *others, node_name=None):
          public_location=public_location,
          user_api_key=user["api_key"])

-     console.log(f"[green]You are now connected to {cluster_name} @ {kalavai_seed_ip}")
+     option = user_confirm(
+         question="Docker compose ready. Would you like Kalavai to deploy it?",
+         options=["no", "yes"]
+     )
+     if option == 0:
+         console.log("Manually deploy the worker with the following command:\n")
+         print(f"docker compose -f {USER_COMPOSE_FILE} up -d")
+         return
+
+     console.log(f"[white] Connecting to {cluster_name} @ {kalavai_seed_ip} (this may take a few minutes)...")
+     run_cmd(f"docker compose -f {USER_COMPOSE_FILE} up -d")
+     # ensure we are connected
+     while True:
+         console.log("Waiting for core services to be ready, may take a few minutes...")
+         time.sleep(30)
+         if is_watcher_alive(server_creds=USER_LOCAL_SERVER_FILE, user_cookie=USER_COOKIE):
+             break
+
+     # set status to schedulable
+     console.log(f"[green] You are connected to {cluster_name}")


  @arguably.command
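
The new attach flow deploys the generated compose file and then polls `is_watcher_alive` every 30 seconds until the pool's core services respond. The loop in the changeset has no upper bound; a hedged variant with a timeout (a sketch of the same pattern, not part of the diff; `wait_for_pool` is a hypothetical wrapper) would look like this:

```python
# Hedged variant of the readiness wait used in pool__attach: same polling idea,
# but with an explicit timeout so a misconfigured pool doesn't block forever.
# is_watcher_alive, USER_LOCAL_SERVER_FILE and USER_COOKIE are names from the diff.
import time


def wait_for_pool(is_watcher_alive, server_creds, user_cookie, timeout=600, interval=30):
    # Poll the watcher until it answers or the timeout expires.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if is_watcher_alive(server_creds=server_creds, user_cookie=user_cookie):
            return True
        time.sleep(interval)
    return False

# Usage (mirroring the attach flow):
# if not wait_for_pool(is_watcher_alive, USER_LOCAL_SERVER_FILE, USER_COOKIE):
#     console.log("[red]Timed out waiting for core services")
```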
@@ -389,7 +389,7 @@ def resource_path(relative_path: str):
          last_slash = relative_path.rfind("/")
          path = relative_path[:last_slash].replace("/", ".")
          filename = relative_path[last_slash+1:]
-         resource = importlib.resources.path(path, filename)
+         resource = str(importlib.resources.files(path).joinpath(filename))
      except Exception as e:
          return None
      return resource
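
The `resource_path` change replaces `importlib.resources.path()`, a legacy helper that returns a context manager, with the `files()` API, which returns a `Traversable` that can be converted to a path string directly; this is presumably part of supporting newer Pythons, per the metadata changes above. A minimal sketch of the new-style lookup (the call at the bottom is a hypothetical example):

```python
# Minimal sketch of the files()-based lookup now used in resource_path.
# files() is available from Python 3.9 (earlier via the importlib_resources backport)
# and returns a Traversable, so no context manager is needed.
from importlib import resources


def asset_path(package: str, filename: str) -> str:
    # Resolve a packaged data file to a usable path string.
    return str(resources.files(package).joinpath(filename))


# Hypothetical call for one of the packaged assets:
# asset_path("kalavai_client.assets", "apps.yaml")
```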
@@ -1,6 +1,6 @@
  [project]
  name = "kalavai-client"
- version = "0.5.8"
+ version = "0.5.10"
  authors = [
      {name = "Carlos Fernandez Musoles", email = "carlos@kalavai.net"}
  ]
@@ -12,7 +12,7 @@ license = "Apache-2.0"
  license-files = ["LICENSE"]
  keywords = ["LLM", "platform"]
  readme = {file = "README.md", content-type = "text/markdown"}
- requires-python = "<3.12"
+ requires-python = ">=3.4"
  dependencies = [
      "requests>= 2.25",
      "psutil==5.9.8",
@@ -1,2 +0,0 @@
-
- __version__ = "0.5.8"