mimiry-cli 0.1.1__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026 Mimiry

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@@ -0,0 +1,2 @@
include README.md
include LICENSE
@@ -0,0 +1,447 @@
Metadata-Version: 2.4
Name: mimiry-cli
Version: 0.1.1
Summary: Python SDK for the Mimiry GPU Cloud API
Author: Mimiry
License-Expression: MIT
Project-URL: Documentation, https://mimiryprimary.lovable.app
Project-URL: Source, https://github.com/OTSorensen/mimiry-python-sdk
Project-URL: Bug Tracker, https://github.com/OTSorensen/mimiry-python-sdk/issues
Keywords: gpu,cloud,api,sdk,mimiry,machine-learning
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: httpx>=0.24.0
Provides-Extra: cli
Requires-Dist: typer>=0.9.0; extra == "cli"
Requires-Dist: rich>=13.0.0; extra == "cli"
Dynamic: license-file

# Mimiry Python SDK

Python client for the [Mimiry GPU Cloud API](https://mimiryprimary.lovable.app). Deploy GPU instances, submit batch jobs, and manage cloud resources programmatically.

## Installation

### Prerequisites (Linux/WSL)

Modern Linux distributions (Debian 12+, Ubuntu 23.04+, WSL) require a virtual environment due to [PEP 668](https://peps.python.org/pep-0668/):

```bash
# Create and activate a virtual environment
python3 -m venv ~/.venvs/mimiry
source ~/.venvs/mimiry/bin/activate
```

### Install from PyPI

```bash
pip install "mimiry-cli[cli]"
```

### Verify installation

```bash
mimiry version
# mimiry 0.1.1
```

### Install from source (development)

```bash
git clone https://github.com/OTSorensen/mimiry-python-sdk
cd mimiry-python-sdk
pip install -e ".[cli]"
```

## Quick Start

```python
from mimiry import MimiryClient

client = MimiryClient(api_key="mky_your_key_here")

# List available GPUs (defaults to Verda)
currency = "eur"
symbol = {"eur": "€", "usd": "$"}.get(currency, currency.upper())
for gpu in client.list_instance_types(currency=currency):
    print(f"{gpu['instance_type']} — {symbol}{gpu['price_per_hour']}/hr")

# List Scaleway GPUs
for gpu in client.list_instance_types(provider="scaleway"):
    print(f"{gpu['instance_type']} — €{gpu['price_per_hour']}/hr")

# Submit a job
job = client.submit_job(
    name="training-run",
    instance_type="1V100.6V",
    image="ubuntu-22.04-cuda-12.0",
    location="FIN-01",
    ssh_key_ids=["your-key-uuid"],
    startup_script="#!/bin/bash\npython train.py",
    auto_shutdown=True,
)
print(f"Job {job['id']} submitted — status: {job['status']}")
```

## Authentication

1. Create an API key from the [Mimiry Dashboard](https://mimiryprimary.lovable.app) → **API Keys**
2. Pass it to the client:

```python
client = MimiryClient(api_key="mky_your_key_here")
```

API keys require scopes for the endpoints you want to access:

| Scope | Endpoints |
|-------|-----------|
| `jobs:read` | List/get jobs |
| `jobs:write` | Submit/cancel jobs |
| `instances:read` | List GPUs, locations, availability, images, providers |
| `ssh_keys:read` | List SSH keys |
| `ssh_keys:write` | Add/delete SSH keys |
| `registry:read` | List registry credentials |
| `registry:write` | Add/delete registry credentials |
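
When creating a key, it can help to work out the minimal scope set up front. A small sketch of that bookkeeping (`REQUIRED_SCOPE` and `scopes_for` are hypothetical helpers built from the table above, not part of the SDK):

```python
# Hypothetical mapping from SDK method to the scope it needs,
# taken from the scope table above.
REQUIRED_SCOPE = {
    "list_jobs": "jobs:read",
    "get_job": "jobs:read",
    "submit_job": "jobs:write",
    "cancel_job": "jobs:write",
    "list_instance_types": "instances:read",
    "check_availability": "instances:read",
    "list_locations": "instances:read",
    "list_images": "instances:read",
    "list_providers": "instances:read",
    "list_ssh_keys": "ssh_keys:read",
    "add_ssh_key": "ssh_keys:write",
    "delete_ssh_key": "ssh_keys:write",
    "list_registry_credentials": "registry:read",
    "add_registry_credential": "registry:write",
    "delete_registry_credential": "registry:write",
}

def scopes_for(*methods):
    """Return the minimal scope set a key needs for the given calls."""
    return sorted({REQUIRED_SCOPE[m] for m in methods})
```

For example, a key used only to submit jobs and browse GPU types needs `scopes_for("submit_job", "list_instance_types")`, i.e. `instances:read` plus `jobs:write`.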

## Supported Providers

The SDK is provider-agnostic. Pass `provider="..."` to target a specific backend.

| Provider | Slug | GPU Types | Locations |
|----------|------|-----------|-----------|
| Verda | `verda` (default) | V100, A100, H100, etc. | FIN-01 (Helsinki) |
| Scaleway | `scaleway` | H100, L4, L40S, B300 | fr-par-2 (Paris), nl-ams-1 (Amsterdam), pl-waw-2 (Warsaw), + more |

> **Note:** The legacy slug `datacrunch` is still accepted as an alias for `verda` for backward compatibility.

### Scaleway Instance Types

Scaleway GPU types follow the pattern `{GPU}-{count}-{memory}`:

| Instance Type | GPU | VRAM | Example |
|---------------|-----|------|---------|
| `H100-1-80G` | 1× H100 | 80 GB | Single H100 |
| `H100-2-80G` | 2× H100 | 160 GB | Dual H100 |
| `L40S-1-48G` | 1× L40S | 48 GB | Single L40S |
| `L4-1-24G` | 1× L4 | 24 GB | Single L4 |
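
For display or validation, that naming pattern can be split apart with a small helper (hypothetical, not part of the SDK; it assumes the `{memory}` segment is per GPU, as the VRAM column above indicates):

```python
import re

def parse_scaleway_type(instance_type: str) -> dict:
    """Split a Scaleway instance type like 'H100-2-80G' into its parts."""
    m = re.fullmatch(r"([A-Z0-9]+)-(\d+)-(\d+)G", instance_type)
    if not m:
        raise ValueError(f"unrecognized instance type: {instance_type}")
    gpu, count, vram_per_gpu = m.group(1), int(m.group(2)), int(m.group(3))
    return {
        "gpu": gpu,
        "count": count,
        "vram_per_gpu_gb": vram_per_gpu,
        "total_vram_gb": count * vram_per_gpu,  # e.g. H100-2-80G -> 160 GB
    }
```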

### Scaleway Locations

| Code | Name |
|------|------|
| `fr-par-1` | Paris 1 |
| `fr-par-2` | Paris 2 (GPU) |
| `fr-par-3` | Paris 3 |
| `nl-ams-1` | Amsterdam 1 |
| `nl-ams-2` | Amsterdam 2 |
| `pl-waw-2` | Warsaw 2 |
| `pl-waw-3` | Warsaw 3 |

## API Reference

### Jobs

```python
# List all jobs
jobs = client.list_jobs()

# Get job details
job = client.get_job("job-uuid")

# Submit a job (Verda — default)
job = client.submit_job(
    name="my-job",
    instance_type="1V100.6V",
    image="ubuntu-22.04-cuda-12.0",
    location="FIN-01",
    ssh_key_ids=["key-uuid"],
    startup_script="#!/bin/bash\nnvidia-smi",
    auto_shutdown=True,
    heartbeat_timeout_seconds=1800,  # optional, default 600
    max_runtime_seconds=7200,  # optional, no default
)

# Submit a job on Scaleway
job = client.submit_job(
    name="scaleway-training",
    instance_type="H100-1-80G",
    image="ubuntu_jammy",
    location="fr-par-2",
    ssh_key_ids=["key-uuid"],
    startup_script="#!/bin/bash\nnvidia-smi",
    provider="scaleway",
    auto_shutdown=True,
)

# Cancel a job
client.cancel_job("job-uuid")

# Wait for a job to finish (polls every 10s, timeout 1h)
result = client.wait_for_job("job-uuid", poll_interval=10, timeout=3600)

# Submit and wait in one call
result = client.submit_job_and_wait(
    name="my-job",
    instance_type="1V100.6V",
    image="ubuntu-22.04-cuda-12.0",
    location="FIN-01",
    ssh_key_ids=["key-uuid"],
    startup_script="#!/bin/bash\npython train.py",
)
```

### Streaming Logs & Metrics

Jobs automatically stream stdout/stderr to the dashboard in real time (every 15s). You can also emit **structured metrics** (loss, accuracy, etc.) that will appear as interactive charts.

#### Emitting Metrics (File Convention)

Write JSONL to `/tmp/mimiry_metrics.jsonl` — no SDK dependency required:

```python
import json

# In your training loop:
for epoch in range(num_epochs):
    loss = train_one_epoch()
    accuracy = evaluate()

    # Write metrics — they appear as live charts in the dashboard
    with open("/tmp/mimiry_metrics.jsonl", "a") as f:
        f.write(json.dumps({
            "step": epoch,
            "loss": float(loss),
            "accuracy": float(accuracy),
            "learning_rate": optimizer.param_groups[0]["lr"],
        }) + "\n")
```

**Rules:**
- Each line must be valid JSON
- Include a `step` or `epoch` field for the X-axis
- All numeric fields are automatically plotted
- The agent watches the file every 10s and streams new entries to the dashboard
- No SDK import needed — works with any framework (PyTorch, TensorFlow, JAX, etc.)
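
Those rules can be checked locally before launching a job; a sketch of a per-line validator (illustrative only — the agent's actual parsing may differ):

```python
import json

def valid_metric_line(line: str) -> bool:
    """Check one JSONL metrics line against the rules above."""
    try:
        record = json.loads(line)
    except json.JSONDecodeError:
        return False  # each line must be valid JSON
    if not isinstance(record, dict):
        return False
    if "step" not in record and "epoch" not in record:
        return False  # need an X-axis field
    # at least one numeric field to plot (bools are not metrics)
    return any(
        isinstance(v, (int, float)) and not isinstance(v, bool)
        for v in record.values()
    )
```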

#### PyTorch Example

```python
import json, torch

metrics_file = "/tmp/mimiry_metrics.jsonl"

for epoch in range(100):
    model.train()
    total_loss = 0
    for batch in train_loader:
        loss = train_step(model, batch)
        total_loss += loss.item()

    avg_loss = total_loss / len(train_loader)
    val_acc = evaluate(model, val_loader)

    with open(metrics_file, "a") as f:
        f.write(json.dumps({
            "step": epoch,
            "train_loss": avg_loss,
            "val_accuracy": val_acc,
            "gpu_memory_mb": torch.cuda.max_memory_allocated() / 1e6,
        }) + "\n")
```

#### Viewing Logs & Metrics

- **Dashboard**: Click any job → **Logs** tab shows streaming output, **Metrics** tab shows interactive charts
- **API**: Logs and metrics are stored in `job_logs` and `job_metrics` tables, queryable via the Supabase client

### Instance Types

```python
# List all GPU types with pricing (EUR default, Verda default)
gpus = client.list_instance_types()

# List in USD
gpus = client.list_instance_types(currency="usd")

# List Scaleway GPU types
gpus = client.list_instance_types(provider="scaleway")
```

### Availability

```python
# Check all availability (Verda)
available = client.check_availability()

# Check specific instance type
available = client.check_availability(instance_type="1V100.6V")

# Check Scaleway availability
available = client.check_availability(provider="scaleway")
available = client.check_availability(instance_type="H100-1-80G", provider="scaleway")
```

### Locations

```python
# Verda locations
locations = client.list_locations()

# Scaleway locations
locations = client.list_locations(provider="scaleway")
```

### OS Images

```python
# All images (Verda)
images = client.list_images()

# Images compatible with a specific GPU type
images = client.list_images(instance_type="1V100.6V")

# Scaleway images
images = client.list_images(provider="scaleway")
```

### Providers

```python
providers = client.list_providers()
# Returns: [{"name": "Verda", "slug": "verda"}, {"name": "Scaleway", "slug": "scaleway"}]
```

### SSH Keys

SSH keys are synced to all active providers automatically when created via the API.

```python
from pathlib import Path

# List keys
keys = client.list_ssh_keys()

# Add a key (synced to Verda + Scaleway); note `~` must be expanded explicitly
key = client.add_ssh_key("my-laptop", Path("~/.ssh/id_ed25519.pub").expanduser().read_text())

# Delete a key (removed from all providers)
client.delete_ssh_key("key-uuid")
```

### Container Registry Credentials

Store credentials for private container registries (Docker Hub, AWS ECR, GHCR, etc.). When you submit a job with a `container_image` and `registry_credential_id`, the platform automatically runs `docker login` + `docker pull` before your startup script.

```python
# List saved credentials
creds = client.list_registry_credentials()

# Add a generic credential (Docker Hub, GHCR, etc.)
cred = client.add_registry_credential(
    name="Docker Hub",
    registry_url="docker.io",
    username="myuser",
    password="dckr_pat_xxxxxxxxxxxx",
    is_default=True,
)

# Add an AWS ECR credential
# Your IAM credentials are stored securely. At job dispatch time, the platform
# exchanges them for a short-lived ECR token (valid 12h) server-side.
# Your AWS credentials never touch the compute node.
ecr_cred = client.add_registry_credential(
    name="My ECR",
    registry_url="123456789.dkr.ecr.eu-west-1.amazonaws.com",
    username="AKIAIOSFODNN7EXAMPLE",  # AWS Access Key ID
    password="wJalrXUtnFEMI/K7MDENG/bPx",  # AWS Secret Access Key
    registry_type="aws_ecr",
    aws_region="eu-west-1",
)

# Delete a credential
client.delete_registry_credential("credential-uuid")

# Submit a job with a private container image
job = client.submit_job(
    name="bio-pipeline",
    instance_type="1V100.6V",
    image="ubuntu-22.04-cuda-12.0",
    location="FIN-01",
    ssh_key_ids=["key-uuid"],
    container_image="ghcr.io/myorg/pipeline:v2",
    registry_credential_id=cred["id"],
    startup_script="docker run ghcr.io/myorg/pipeline:v2 --data /mnt/data",
    auto_shutdown=True,
)
```

## CLI

For terminal usage, see the [CLI Guide](/cli-guide).

## Error Handling

The SDK raises typed exceptions for API errors:

```python
from mimiry import MimiryClient, MimiryError, AuthenticationError, InsufficientCreditsError

client = MimiryClient(api_key="mky_...")

try:
    job = client.submit_job(...)
except AuthenticationError:
    print("Invalid API key")
except InsufficientCreditsError:
    print("Not enough credits — top up at the dashboard")
except MimiryError as e:
    print(f"API error [{e.status_code}]: {e.message}")
```

| Exception | HTTP Status | Meaning |
|-----------|-------------|---------|
| `AuthenticationError` | 401 | Invalid or missing API key |
| `InsufficientCreditsError` | 402 | Not enough credits |
| `InsufficientScopeError` | 403 | API key lacks required scope |
| `NotFoundError` | 404 | Resource not found |
| `RateLimitError` | 429 | Too many requests |
| `ServerError` | 5xx | Server-side error |
| `MimiryError` | other | Catch-all base exception |

## Context Manager

The client can be used as a context manager to ensure connections are closed:

```python
with MimiryClient(api_key="mky_...") as client:
    jobs = client.list_jobs()
```

## Configuration

```python
client = MimiryClient(
    api_key="mky_...",
    base_url="https://custom-endpoint.example.com",  # override API URL
    timeout=60.0,  # request timeout in seconds (default 30)
    max_retries=5,  # retry count for transient failures (default 3)
)
```

## Requirements

- Python ≥ 3.8
- `httpx` ≥ 0.24.0

## License

MIT