wafer-cli 0.2.9__py3-none-any.whl → 0.2.11__py3-none-any.whl

wafer/GUIDE.md CHANGED
@@ -8,12 +8,17 @@ Run code on cloud GPUs instantly with workspaces:
 
 ```bash
 wafer login # One-time auth
-wafer workspaces create dev --gpu B200 # Create workspace
+wafer workspaces create dev --gpu B200 # Create workspace (NVIDIA B200)
 wafer workspaces exec dev -- python -c "import torch; print(torch.cuda.get_device_name(0))"
 wafer workspaces sync dev ./my-project # Sync files
 wafer workspaces exec dev -- python train.py
 ```
 
+**Available GPUs:**
+
+- `MI300X` - AMD Instinct MI300X (192GB HBM3, ROCm)
+- `B200` - NVIDIA Blackwell B200 (180GB HBM3e, CUDA) - default
+
 ## Documentation Lookup
 
 Answer GPU programming questions from indexed documentation.
@@ -83,13 +88,19 @@ wafer agent -t optimize-kernel \
 
 Cloud GPU environments with no setup required.
 
+**Available GPUs:**
+
+- `MI300X` - AMD Instinct MI300X (192GB HBM3, ROCm)
+- `B200` - NVIDIA Blackwell B200 (180GB HBM3e, CUDA) - default
+
 ```bash
-wafer workspaces create dev --gpu B200 # Create
-wafer workspaces list # List all
-wafer workspaces sync dev ./project # Sync files
-wafer workspaces exec dev -- ./run.sh # Run commands
-wafer workspaces ssh dev # Interactive SSH
-wafer workspaces delete dev # Cleanup
+wafer workspaces create dev --gpu B200 --wait # NVIDIA B200
+wafer workspaces create amd-dev --gpu MI300X # AMD MI300X
+wafer workspaces list # List all
+wafer workspaces sync dev ./project # Sync files
+wafer workspaces exec dev -- ./run.sh # Run commands
+wafer workspaces ssh dev # Interactive SSH
+wafer workspaces delete dev # Cleanup
 ```
 
 See `wafer workspaces --help` for details.
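
The updated guide now covers both accelerators. As a rough illustration only (this script is not part of wafer-cli), the documented MI300X workflow could be driven from Python via `subprocess`, assuming `wafer` is on `PATH` and its subcommands exit nonzero on failure:

```python
import subprocess

def wafer(*args: str) -> None:
    """Invoke the wafer CLI and raise if the command fails."""
    subprocess.run(["wafer", *args], check=True)

# The workspace lifecycle documented in the updated GUIDE.md,
# targeting the AMD MI300X option added in this release.
wafer("workspaces", "create", "amd-dev", "--gpu", "MI300X")
wafer("workspaces", "sync", "amd-dev", "./project")
wafer("workspaces", "exec", "amd-dev", "--", "./run.sh")
wafer("workspaces", "delete", "amd-dev")
```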
wafer/api_client.py CHANGED
@@ -108,6 +108,7 @@ def run_command_stream(
     upload_dir: Path | None = None,
     workspace_id: str | None = None,
     gpu_id: int | None = None,
+    gpu_count: int = 1,
     docker_image: str | None = None,
     docker_entrypoint: str | None = None,
     pull_image: bool = False,
@@ -125,6 +126,7 @@ def run_command_stream(
         upload_dir: Directory to upload (stateless mode)
         workspace_id: Workspace ID from push (low-level mode)
         gpu_id: GPU ID to use (optional)
+        gpu_count: Number of GPUs needed (1-8, default 1)
         docker_image: Docker image override (optional)
         docker_entrypoint: Docker entrypoint override (optional, e.g., "bash")
         pull_image: Pull image if not available (optional, default False)
@@ -152,6 +154,8 @@ def run_command_stream(
 
     if gpu_id is not None:
         request_body["gpu_id"] = gpu_id
+    if gpu_count > 1:
+        request_body["gpu_count"] = gpu_count
     if docker_image is not None:
         request_body["docker_image"] = docker_image
     if docker_entrypoint is not None:
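
Read together, the three api_client.py hunks add an opt-in `gpu_count` knob to `run_command_stream`: it defaults to 1 and is serialized into the request body only when greater than 1, so single-GPU callers produce an unchanged wire format. A self-contained sketch of that serialization rule (`build_gpu_fields` is a hypothetical helper written for illustration, not a wafer-cli function):

```python
def build_gpu_fields(gpu_id: int | None = None, gpu_count: int = 1) -> dict:
    """Mirror the request-body logic from the diff: gpu_id is sent when
    set, and gpu_count only when it exceeds the default of 1."""
    request_body: dict = {}
    if gpu_id is not None:
        request_body["gpu_id"] = gpu_id
    if gpu_count > 1:
        request_body["gpu_count"] = gpu_count
    return request_body

assert build_gpu_fields() == {}                           # default: field omitted
assert build_gpu_fields(gpu_count=4) == {"gpu_count": 4}  # multi-GPU request
assert build_gpu_fields(gpu_id=0) == {"gpu_id": 0}        # explicit single GPU
```

Keeping the field off the wire at its default, presumably the reason the guard is `> 1` rather than always sending the field, means servers that predate 0.2.11 never see the unknown key from unmodified callers. Note also that the documented range (1-8) is not enforced in the hunk shown; validation, if any, happens elsewhere.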