xpk 0.7.2__py3-none-any.whl → 0.9.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (46)
  1. xpk/commands/batch.py +19 -13
  2. xpk/commands/cluster.py +240 -71
  3. xpk/commands/cluster_gcluster.py +22 -5
  4. xpk/commands/common.py +33 -1
  5. xpk/commands/info.py +2 -4
  6. xpk/commands/job.py +7 -8
  7. xpk/commands/kjob_common.py +30 -18
  8. xpk/commands/run.py +17 -12
  9. xpk/commands/shell.py +3 -4
  10. xpk/commands/storage.py +75 -19
  11. xpk/commands/workload.py +161 -324
  12. xpk/core/blueprint/blueprint_definitions.py +2 -0
  13. xpk/core/blueprint/blueprint_generator.py +335 -45
  14. xpk/core/capacity.py +1 -0
  15. xpk/core/cluster.py +193 -12
  16. xpk/core/config.py +3 -1
  17. xpk/core/docker_manager.py +1 -1
  18. xpk/core/docker_resources.py +9 -21
  19. xpk/core/filestore.py +5 -1
  20. xpk/core/gcsfuse.py +27 -6
  21. xpk/core/kjob.py +66 -20
  22. xpk/core/kueue.py +30 -0
  23. xpk/core/mtc.py +195 -0
  24. xpk/core/nap.py +4 -0
  25. xpk/core/network.py +34 -22
  26. xpk/core/nodepool.py +28 -26
  27. xpk/core/pathways.py +165 -210
  28. xpk/core/resources.py +21 -0
  29. xpk/core/scheduling.py +36 -0
  30. xpk/core/storage.py +66 -12
  31. xpk/core/system_characteristics.py +9 -0
  32. xpk/core/workload.py +28 -83
  33. xpk/core/workload_decorators/rdma_decorator.py +11 -15
  34. xpk/core/workload_decorators/storage_decorator.py +8 -3
  35. xpk/core/workload_decorators/tcpx_decorator.py +179 -0
  36. xpk/core/workload_decorators/tcpxo_decorator.py +17 -16
  37. xpk/parser/cluster.py +574 -381
  38. xpk/parser/storage.py +25 -5
  39. xpk/parser/workload.py +59 -31
  40. xpk/utils/kubectl.py +4 -1
  41. {xpk-0.7.2.dist-info → xpk-0.9.0.dist-info}/METADATA +192 -93
  42. {xpk-0.7.2.dist-info → xpk-0.9.0.dist-info}/RECORD +46 -44
  43. {xpk-0.7.2.dist-info → xpk-0.9.0.dist-info}/WHEEL +1 -1
  44. {xpk-0.7.2.dist-info → xpk-0.9.0.dist-info}/entry_points.txt +0 -0
  45. {xpk-0.7.2.dist-info → xpk-0.9.0.dist-info}/licenses/LICENSE +0 -0
  46. {xpk-0.7.2.dist-info → xpk-0.9.0.dist-info}/top_level.txt +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: xpk
- Version: 0.7.2
+ Version: 0.9.0
  Summary: xpk helps Cloud developers to orchestrate training jobs on accelerators on GKE.
  Author-email: XPK team <xpk-code-reviewers@google.com>
  License: Apache-2.0
@@ -46,23 +46,18 @@ Dynamic: license-file
  limitations under the License.
  -->
 
- [![Build Tests](https://github.com/google/xpk/actions/workflows/build_tests.yaml/badge.svg)](https://github.com/google/xpk/actions/workflows/build_tests.yaml)
- [![Nightly Tests](https://github.com/google/xpk/actions/workflows/nightly_tests.yaml/badge.svg)](https://github.com/google/xpk/actions/workflows/nightly_tests.yaml)
- [![Develop Tests](https://github.com/AI-Hypercomputer/xpk/actions/workflows/build_tests.yaml/badge.svg?branch=develop)](https://github.com/AI-Hypercomputer/xpk/actions/workflows/build_tests.yaml)
- [![Develop Nightly Tests](https://github.com/AI-Hypercomputer/xpk/actions/workflows/nightly_tests.yaml/badge.svg?branch=develop)](https://github.com/AI-Hypercomputer/xpk/actions/workflows/nightly_tests.yaml)
+ [![Build Tests](https://github.com/google/xpk/actions/workflows/build_tests.yaml/badge.svg?query=branch%3Amain)](https://github.com/google/xpk/actions/workflows/build_tests.yaml?query=branch%3Amain)
+ [![Nightly Tests](https://github.com/google/xpk/actions/workflows/nightly_tests.yaml/badge.svg?query=branch%3Amain)](https://github.com/google/xpk/actions/workflows/nightly_tests.yaml?query=branch%3Amain)
+ [![Develop Tests](https://github.com/AI-Hypercomputer/xpk/actions/workflows/build_tests.yaml/badge.svg?query=branch%3Adevelop)](https://github.com/AI-Hypercomputer/xpk/actions/workflows/build_tests.yaml?query=branch%3Adevelop)
+ [![Develop Nightly Tests](https://github.com/AI-Hypercomputer/xpk/actions/workflows/nightly_tests.yaml/badge.svg?query=branch%3Adevelop)](https://github.com/AI-Hypercomputer/xpk/actions/workflows/nightly_tests.yaml?query=branch%3Adevelop)
 
  # Overview
 
- xpk (Accelerated Processing Kit, pronounced x-p-k,) is a software tool to help
- Cloud developers to orchestrate training jobs on accelerators such as TPUs and
- GPUs on GKE. xpk handles the "multihost pods" of TPUs, GPUs (HGX H100) and CPUs
- (n2-standard-32) as first class citizens.
+ XPK (Accelerated Processing Kit, pronounced x-p-k) is a command line interface that simplifies cluster creation and workload execution on Google Kubernetes Engine (GKE). XPK generates preconfigured, training-optimized clusters and allows easy workload scheduling without any Kubernetes expertise.
 
- xpk decouples provisioning capacity from running jobs. There are two structures:
- clusters (provisioned VMs) and workloads (training jobs). Clusters represent the
- physical resources you have available. Workloads represent training jobs -- at
- any time some of these will be completed, others will be running and some will
- be queued, waiting for cluster resources to become available.
+ XPK is recommended for quick creation of GKE clusters for proofs of concepts and testing.
+
+ XPK decouples provisioning capacity from running jobs. There are two structures: clusters (provisioned VMs) and workloads (training jobs). Clusters represent the physical resources you have available. Workloads represent training jobs -- at any time some of these will be completed, others will be running and some will be queued, waiting for cluster resources to become available.
 
  The ideal workflow starts by provisioning the clusters for all of the ML
  hardware you have reserved. Then, without re-provisioning, submit jobs as
@@ -73,7 +68,7 @@ return the hardware back to the shared pool when they complete, developers can
  achieve better use of finite hardware resources. And automated tests can run
  overnight while resources tend to be underutilized.
 
- xpk supports the following TPU types:
+ XPK supports the following TPU types:
  * v4
  * v5e
  * v5p
@@ -82,15 +77,18 @@ xpk supports the following TPU types:
  and the following GPU types:
  * A100
  * A3-Highgpu (h100)
- * A3-Mega (h100-mega) - [Create cluster](#provisioning-a3-ultra-and-a3-mega-clusters-gpu-machines), [Create workloads](#workloads-for-a3-ultra-and-a3-mega-clusters-gpu-machines)
- * A3-Ultra (h200) - [Create cluster](#provisioning-a3-ultra-and-a3-mega-clusters-gpu-machines), [Create workloads](#workloads-for-a3-ultra-and-a3-mega-clusters-gpu-machines)
+ * A3-Mega (h100-mega) - [Create cluster](#provisioning-a3-ultra-a3-mega-and-a4-clusters-gpu-machines), [Create workloads](#workloads-for-a3-ultra-a3-mega-and-a4-clusters-gpu-machines)
+ * A3-Ultra (h200) - [Create cluster](#provisioning-a3-ultra-a3-mega-and-a4-clusters-gpu-machines), [Create workloads](#workloads-for-a3-ultra-a3-mega-and-a4-clusters-gpu-machines)
+ * A4 (b200) - [Create cluster](#provisioning-a3-ultra-a3-mega-and-a4-clusters-gpu-machines), [Create workloads](#workloads-for-a3-ultra-a3-mega-and-a4-clusters-gpu-machines)
 
  and the following CPU types:
  * n2-standard-32
 
- xpk also supports Google Cloud Storage solutions:
+ XPK also supports [Google Cloud Storage solutions](#storage):
  * [Cloud Storage FUSE](#fuse)
  * [Filestore](#filestore)
+ * [Parallelstore](#parallelstore)
+ * [Block storage (Persistent Disk, Hyperdisk)](#block-storage-persistent-disk-hyperdisk)
 
  # Permissions needed on Cloud Console:
 
@@ -104,77 +102,93 @@ xpk also supports Google Cloud Storage solutions:
  * Vertex AI Administrator
  * Filestore Editor (This role is necessary if you want to run `storage create` command with `--type=gcpfilestore`)
 
- # Prerequisites
+ # Installation
+
+ There are 2 ways to install XPK:
+
+ - via Python package installer (`pip`),
+ - clone from git and build from source.
+
+ ## Prerequisites
 
- Following tools must be installed:
+ The following tools must be installed:
 
- - python >= 3.10 (download from [here](https://www.python.org/downloads/))
- - pip ([installation instruction](https://pip.pypa.io/en/stable/installation/))
- - python venv ([installation instruction](https://virtualenv.pypa.io/en/latest/installation.html))
+ - python >= 3.10: download from [here](https://www.python.org/downloads/)
+ - pip: [installation instructions](https://pip.pypa.io/en/stable/installation/)
+ - python venv: [installation instructions](https://virtualenv.pypa.io/en/latest/installation.html)
  (all three of above can be installed at once from [here](https://packaging.python.org/en/latest/guides/installing-using-linux-tools/#installing-pip-setuptools-wheel-with-linux-package-managers))
- - gcloud (install from [here](https://cloud.google.com/sdk/gcloud#download_and_install_the))
+ - gcloud: install from [here](https://cloud.google.com/sdk/gcloud#download_and_install_the) and then:
  - Run `gcloud init`
  - [Authenticate](https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login) to Google Cloud
- - kubectl (install from [here](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_kubectl))
+ - kubectl: install from [here](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_kubectl) and then:
  - Install `gke-gcloud-auth-plugin` from [here](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_plugin)
- - docker ([installation instruction](https://docs.docker.com/engine/install/))
+ - docker: [installation instructions](https://docs.docker.com/engine/install/) and then:
+ - Configure sudoless docker: [guide](https://docs.docker.com/engine/install/linux-postinstall/)
  - Run `gcloud auth configure-docker` to ensure images can be uploaded to registry
- - make - please run below command.
- ```shell
- # sudo may be required
- apt-get -y install make
- ```
- In addition, below dependencies can be installed either using provided links or using `make install` command, if xpk is downloaded via `git clone` command:
- - kueuectl (install from [here](https://kueue.sigs.k8s.io/docs/reference/kubectl-kueue/installation/))
- - kjob (installation instructions [here](https://github.com/kubernetes-sigs/kjob/blob/main/docs/installation.md))
 
- # Installation
- To install xpk, install required tools mentioned in [prerequisites](#prerequisites). [Makefile](https://github.com/AI-Hypercomputer/xpk/blob/main/Makefile) provides a way to install all necessary tools. XPK can be installed via pip:
+ ### Additional prerequisites when installing from pip
+
+ - kueuectl: install from [here](https://kueue.sigs.k8s.io/docs/reference/kubectl-kueue/installation/)
+ - kjob: installation instructions [here](https://github.com/kubernetes-sigs/kjob/blob/main/docs/installation.md)
+
+ ### Additional prerequisites when installing from source
+
+ - git: [installation instructions](https://git-scm.com/downloads/linux)
+ - make: install by running `apt-get -y install make` (`sudo` might be required)
+
+ ## Installation via pip
+
+ To install XPK using pip, first install required tools mentioned in [prerequisites](#prerequisites) and [additional prerequisites](#additional-prerequisites-when-installing-from-pip). Then you can install XPK simply by running:
 
  ```shell
  pip install xpk
  ```
 
- If you see an error saying: `This environment is externally managed`, please use a virtual environment.
+ If you see an error saying: `This environment is externally managed`, please use a virtual environment. For example:
 
  ```shell
- ## One time step of creating the venv
- VENV_DIR=~/venvp3
- python3 -m venv $VENV_DIR
- ## Enter your venv.
- source $VENV_DIR/bin/activate
- ## Clone the repository and installing dependencies.
- pip install xpk
+ # One time step of creating the virtual environment
+ VENV_DIR=~/venvp3
+ python3 -m venv $VENV_DIR
+
+ # Activate your virtual environment
+ source $VENV_DIR/bin/activate
+
+ # Install XPK in virtual environment using pip
+ pip install xpk
  ```
 
- If you are running XPK by cloning GitHub repository, first run the
- following commands to begin using XPK commands:
+ ## Installation from source
+
+ To install XPK from source, first install required tools mentioned in [prerequisites](#prerequisites) and [additional prerequisites](#additional-prerequisites-when-installing-from-source). Afterwards you can install XPK from source using `make`:
 
  ```shell
+ # Clone the XPK repository
  git clone https://github.com/google/xpk.git
  cd xpk
- # Install required dependencies with make
+
+ # Install required dependencies and build XPK with make
  make install && export PATH=$PATH:$PWD/bin
  ```
 
- If you want to have installed dependecies persist in your PATH please run:
- `echo $PWD/bin` and add its value to `PATH` in .bashrc or .zshrc
+ If you want the dependencies to be available in your PATH please run: `echo $PWD/bin` and add its value to `PATH` in your .bashrc or .zshrc file.
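The PATH step described above can be captured in a small idempotent snippet. This is a hedged sketch, not part of xpk itself; `RC_FILE` and `XPK_BIN` are illustrative names, and the grep guard simply avoids appending the same entry twice:

```shell
# Persist xpk's bin directory on PATH via your shell rc file.
# Point RC_FILE at ~/.bashrc or ~/.zshrc as appropriate.
RC_FILE="${RC_FILE:-$HOME/.bashrc}"
XPK_BIN="$PWD/bin"
# Append only if the entry is not already present, so reruns are harmless.
grep -qsF "$XPK_BIN" "$RC_FILE" || printf 'export PATH="$PATH:%s"\n' "$XPK_BIN" >> "$RC_FILE"
```

After appending, open a new shell (or `source "$RC_FILE"`) for the change to take effect.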
 
- If you see an error saying: `This environment is externally managed`, please use a virtual environment.
-
- Example:
+ If you see an error saying: `This environment is externally managed`, please use a virtual environment. For example:
 
  ```shell
- ## One time step of creating the venv
- VENV_DIR=~/venvp3
- python3 -m venv $VENV_DIR
- ## Enter your venv.
- source $VENV_DIR/bin/activate
- ## Clone the repository and installing dependencies.
- git clone https://github.com/google/xpk.git
- cd xpk
- # Install required dependencies with make
- make install && export PATH=$PATH:$PWD/bin
+ # One time step of creating the virtual environment
+ VENV_DIR=~/venvp3
+ python3 -m venv $VENV_DIR
+
+ # Activate your virtual environment
+ source $VENV_DIR/bin/activate
+
+ # Clone the XPK repository
+ git clone https://github.com/google/xpk.git
+ cd xpk
+
+ # Install required dependencies and build XPK with make
+ make install && export PATH=$PATH:$PWD/bin
  ```
 
  # XPK for Large Scale (>1k VMs)
@@ -253,6 +267,7 @@ all zones.
  --num-slices=4 --on-demand \
  --tpu-type=v5litepod-16
  ```
+ Note that Pathways clusters need a CPU nodepool of n2-standard-64 or higher.
 
  * Cluster Create for Ray:
  A cluster with KubeRay enabled and a RayCluster can be created using `cluster create-ray`.
@@ -454,28 +469,55 @@ will fail the cluster creation process because Vertex AI Tensorboard is not supp
  --tpu-type=v5litepod-16
  ```
 
- ## Provisioning A3-Ultra and A3-Mega clusters (GPU machines)
- To create a cluster with A3 machines, run the below command. To create workloads on these clusters see [here](#workloads-for-a3-ultra-and-a3-mega-clusters-gpu-machines).
- * For A3-Ultra: --device-type=h200-141gb-8
- * For A3-Mega: --device-type=h100-mega-80gb-8
+ ## Provisioning A3 Ultra, A3 Mega and A4 clusters (GPU machines)
+ To create a cluster with A3 or A4 machines, run the command below with the selected device type. To create workloads on these clusters see [here](#workloads-for-a3-ultra-a3-mega-and-a4-clusters-gpu-machines).
 
- ```shell
- python3 xpk.py cluster create \
- --cluster CLUSTER_NAME --device-type=h200-141gb-8 \
+ **Note:** Creating A3 Ultra, A3 Mega and A4 clusters is currently supported **only** on linux/amd64 architecture.
+
+ Machine | Device type
+ :- | :-
+ A3 Mega | `h100-mega-80gb-8`
+ A3 Ultra | `h200-141gb-8`
+ A4 | `b200-8`
+
+ ```shell
+ python3 xpk.py cluster create \
+ --cluster CLUSTER_NAME --device-type DEVICE_TYPE \
  --zone=$COMPUTE_ZONE --project=$PROJECT_ID \
- --num-nodes=4 --reservation=$RESERVATION_ID
- ```
- Currently, the below flags/arguments are supported for A3-Mega and A3-Ultra machines:
- * --num-nodes
- * --default-pool-cpu-machine-type
- * --default-pool-cpu-num-nodes
- * --reservation
- * --spot
- * --on-demand (only A3-Mega)
+ --num-nodes=$NUM_NODES --reservation=$RESERVATION_ID
+ ```
+
+ Currently, the below flags/arguments are supported for A3 Mega, A3 Ultra and A4 machines:
+ * `--num-nodes`
+ * `--default-pool-cpu-machine-type`
+ * `--default-pool-cpu-num-nodes`
+ * `--reservation`
+ * `--spot`
+ * `--on-demand` (A3 Mega only)
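The machine-to-device-type mapping in the table above can be captured in a tiny helper. The function `device_type_for` and its machine keys are this reviewer's illustration, not part of xpk:

```shell
# Map a human-friendly machine name to the --device-type value xpk expects.
# (Values copied from the table above; the helper itself is hypothetical.)
device_type_for() {
  case "$1" in
    a3-mega)  echo "h100-mega-80gb-8" ;;
    a3-ultra) echo "h200-141gb-8" ;;
    a4)       echo "b200-8" ;;
    *)        echo "unsupported machine: $1" >&2; return 1 ;;
  esac
}

DEVICE_TYPE="$(device_type_for a3-ultra)"
echo "$DEVICE_TYPE"   # prints h200-141gb-8
```

The resulting `DEVICE_TYPE` can then be substituted into the `cluster create` command shown in this section.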
 
+ ## Running XPK on existing clusters
+
+ In order to run XPK commands on a cluster, the cluster needs to be set up correctly. This is done automatically when creating a cluster using `xpk cluster create`. For clusters created differently (e.g. with `gcloud` or a Cluster Toolkit blueprint) there is a dedicated command: `xpk cluster adapt`. This command installs required config maps, Kueue, JobSet, CSI drivers etc.
+
+ Currently `xpk cluster adapt` supports only the following device types:
+
+ - `h200-141gb-8` (A3 Ultra)
+
+ Example usage:
+ ```shell
+ python3 xpk.py cluster adapt \
+ --cluster=$CLUSTER_NAME --device-type=$DEVICE_TYPE \
+ --zone=$COMPUTE_ZONE --project=$PROJECT_ID \
+ --num-nodes=$NUM_NODES --reservation=$RESERVATION_ID
+ ```
 
  ## Storage
- Currently XPK supports two types of storages: Cloud Storage FUSE and Google Cloud Filestore.
+ Currently XPK supports the below types of storages:
+ - [Cloud Storage FUSE](#fuse)
+ - [Google Cloud Filestore](#filestore)
+ - [Google Cloud Parallelstore](#parallelstore)
+ - [Google Cloud Block storages (Persistent Disk, Hyperdisk)](#block-storage-persistent-disk-hyperdisk)
 
  ### FUSE
  A FUSE adapter lets you mount and access Cloud Storage buckets as local file systems, so applications can read and write objects in your bucket using standard file system semantics.
@@ -499,11 +541,13 @@ Parameters:
  - `--readonly` - if set to true, workload can only read from storage.
  - `--size` - size of the storage in Gb.
  - `--bucket` - name of the storage bucket. If not set then the name of the storage is used as a bucket name.
+ - `--mount-options` - comma-separated list of additional mount options for PersistentVolume ([reference](https://cloud.google.com/kubernetes-engine/docs/how-to/cloud-storage-fuse-csi-driver-perf#mount-options)).
+ - `--prefetch-metadata` - enables metadata pre-population when mounting the volume by setting parameter `gcsfuseMetadataPrefetchOnMount` to `true` ([reference](https://cloud.google.com/kubernetes-engine/docs/how-to/cloud-storage-fuse-csi-driver-perf#metadata-prefetch)).
  - `--manifest` - path to the manifest file containing PersistentVolume and PersistentVolumeClaim definitions. If set, then values from manifest override the following parameters: `--size` and `--bucket`.
 
  ### Filestore
 
- A Filestore adapter lets you mount and access [Filestore instances](https://cloud.google.com/filestore/) as local file systems, so applications can read and write objects in your volumes using standard file system semantics.
+ A Filestore adapter lets you mount and access [Filestore instances](https://cloud.google.com/filestore/) as local file systems, so applications can read and write files in your volumes using standard file system semantics.
 
  To create and attach a GCP Filestore instance to your cluster use `xpk storage create` command with `--type=gcpfilestore`:
 
@@ -537,6 +581,54 @@ Commands `xpk storage create` and `xpk storage attach` with `--type=gcpfilestore
  - `--instance` - the name of the Filestore instance. If not set then the name parameter is used as an instance name. Useful when connecting multiple volumes from the same Filestore instance.
  - `--manifest` - path to the manifest file containing PersistentVolume, PersistentVolumeClaim and StorageClass definitions. If set, then values from manifest override the following parameters: `--access-mode`, `--size` and `--volume`.
 
+ ### Parallelstore
+
+ A Parallelstore adapter lets you mount and access [Parallelstore instances](https://cloud.google.com/parallelstore/) as local file systems, so applications can read and write files in your volumes using standard file system semantics.
+
+ To use GCS Parallelstore with XPK you need to create a [Parallelstore instance](https://console.cloud.google.com/parallelstore/).
+
+ Once it's ready you can use the `xpk storage attach` command with `--type=parallelstore` to attach a Parallelstore instance to your cluster. Currently, attaching a Parallelstore is supported only by providing a manifest file.
+
+ ```shell
+ python3 xpk.py storage attach test-parallelstore-storage --type=parallelstore \
+ --project=$PROJECT --cluster=$CLUSTER --zone=$ZONE \
+ --mount-point='/test-mount-point' --readonly=false \
+ --auto-mount=true \
+ --manifest='./examples/storage/parallelstore-manifest-attach.yaml'
+ ```
+
+ Parameters:
+
+ - `--type` - type of the storage: `parallelstore`
+ - `--auto-mount` - if set to true all workloads will have this storage mounted by default.
+ - `--mount-point` - the path on which this storage should be mounted for a workload.
+ - `--readonly` - if set to true, workload can only read from storage.
+ - `--manifest` - path to the manifest file containing PersistentVolume and PersistentVolumeClaim definitions.
+
+ ### Block storage (Persistent Disk, Hyperdisk)
+
+ A PersistentDisk adapter lets you mount and access Google Cloud Block storage solutions ([Persistent Disk](https://cloud.google.com/kubernetes-engine/docs/concepts/storage-overview#pd), [Hyperdisk](https://cloud.google.com/kubernetes-engine/docs/concepts/storage-overview#hyperdisk)) as local file systems, so applications can read and write files in your volumes using standard file system semantics.
+
+ To use a GCE PersistentDisk with XPK you need to create a [disk in GCE](https://cloud.google.com/compute/docs/disks). Please make sure that the disk type you are creating is [compatible with the VMs](https://cloud.google.com/compute/docs/machine-resource#machine_type_comparison) in the default and accelerator nodepools.
+
+ Once it's ready you can use the `xpk storage attach` command with `--type=pd` to attach a PersistentDisk instance to your cluster. Currently, attaching a PersistentDisk is supported only by providing a manifest file.
+
+ ```shell
+ python3 xpk.py storage attach test-pd-storage --type=pd \
+ --project=$PROJECT --cluster=$CLUSTER --zone=$ZONE \
+ --mount-point='/test-mount-point' --readonly=false \
+ --auto-mount=true \
+ --manifest='./examples/storage/pd-manifest-attach.yaml'
+ ```
+
+ Parameters:
+
+ - `--type` - type of the storage: `pd`
+ - `--auto-mount` - if set to true all workloads will have this storage mounted by default.
+ - `--mount-point` - the path on which this storage should be mounted for a workload.
+ - `--readonly` - if set to true, workload can only read from storage.
+ - `--manifest` - path to the manifest file containing PersistentVolume and PersistentVolumeClaim definitions.
+
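Since both attach paths above accept only a manifest file, a quick pre-flight check can catch an incomplete manifest before calling xpk. This is a hedged sketch, not an xpk feature; the inline heredoc is a minimal stand-in for your real PV/PVC manifest:

```shell
# Verify a storage manifest defines both kinds that
# `xpk storage attach --manifest=...` expects.
# In practice, point MANIFEST at your real manifest file.
MANIFEST="$(mktemp)"
cat > "$MANIFEST" <<'EOF'
kind: PersistentVolume
---
kind: PersistentVolumeClaim
EOF

for kind in PersistentVolume PersistentVolumeClaim; do
  grep -qx "kind: $kind" "$MANIFEST" || { echo "missing $kind in $MANIFEST" >&2; exit 1; }
done
echo "manifest defines both PersistentVolume and PersistentVolumeClaim"
```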
  ### List attached storages
 
  ```shell
@@ -593,7 +685,7 @@ python3 xpk.py storage delete test-fs-instance \
  --cluster xpk-pw-test \
  --docker-name='user-workload' \
  --docker-image=<maxtext docker image> \
- --command='python3 MaxText/train.py MaxText/configs/base.yml base_output_directory=<output directory> dataset_path=<dataset path> per_device_batch_size=1 enable_checkpointing=false enable_profiler=false remat_policy=full global_parameter_scale=4 steps=300 max_target_length=2048 use_iota_embed=true reuse_example_batch=1 dataset_type=synthetic attention=flash gcs_metrics=True run_name=$(USER)-pw-xpk-test-1'
+ --command='python3 -m MaxText.train MaxText/configs/base.yml base_output_directory=<output directory> dataset_path=<dataset path> per_device_batch_size=1 enable_checkpointing=false enable_profiler=false remat_policy=full global_parameter_scale=4 steps=300 max_target_length=2048 use_iota_embed=true reuse_example_batch=1 dataset_type=synthetic attention=flash gcs_metrics=True run_name=$(USER)-pw-xpk-test-1 enable_single_controller=True'
  ```
 
  Regular workload can also be submitted on a Pathways enabled cluster (created with `cluster create-pathways`)
@@ -607,7 +699,7 @@ python3 xpk.py storage delete test-fs-instance \
  --cluster xpk-pw-test \
  --docker-name='user-workload' \
  --docker-image=<maxtext docker image> \
- --command='python3 MaxText/train.py MaxText/configs/base.yml base_output_directory=<output directory> dataset_path=<dataset path> per_device_batch_size=1 enable_checkpointing=false enable_profiler=false remat_policy=full global_parameter_scale=4 steps=300 max_target_length=2048 use_iota_embed=true reuse_example_batch=1 dataset_type=synthetic attention=flash gcs_metrics=True run_name=$(USER)-pw-xpk-test-1'
+ --command='python3 -m MaxText.train MaxText/configs/base.yml base_output_directory=<output directory> dataset_path=<dataset path> per_device_batch_size=1 enable_checkpointing=false enable_profiler=false remat_policy=full global_parameter_scale=4 steps=300 max_target_length=2048 use_iota_embed=true reuse_example_batch=1 dataset_type=synthetic attention=flash gcs_metrics=True run_name=$(USER)-pw-xpk-test-1'
  ```
 
  Pathways in headless mode - Pathways now offers the capability to run JAX workloads in Vertex AI notebooks or in GCE VMs!
@@ -637,21 +729,27 @@ increase this to a large number, say 50. Real jobs can be interrupted due to
  hardware failures and software updates. We assume your job has implemented
  checkpointing so the job restarts near where it was interrupted.
 
- ### Workloads for A3-Ultra and A3-Mega clusters (GPU machines)
- To submit jobs on a cluster with A3 machines, run the below command. To create a cluster with A3 machines see [here](#provisioning-a3-ultra-and-a3-mega-clusters-gpu-machines).
- * For A3-Ultra: --device-type=h200-141gb-8
- * For A3-Mega: --device-type=h100-mega-80gb-8
+ ### Workloads for A3 Ultra, A3 Mega and A4 clusters (GPU machines)
+ To submit jobs on a cluster with A3 or A4 machines, run the command with the selected device type. To create a cluster with A3 or A4 machines see [here](#provisioning-a3-ultra-a3-mega-and-a4-clusters-gpu-machines).
 
- ```shell
- python3 xpk.py workload create \
+
+ Machine | Device type
+ :- | :-
+ A3 Mega | `h100-mega-80gb-8`
+ A3 Ultra | `h200-141gb-8`
+ A4 | `b200-8`
+
+ ```shell
+ python3 xpk.py workload create \
  --workload=$WORKLOAD_NAME --command="echo goodbye" \
- --cluster=$CLUSTER_NAME --device-type=h200-141gb-8 \
+ --cluster=$CLUSTER_NAME --device-type DEVICE_TYPE \
  --zone=$COMPUTE_ZONE --project=$PROJECT_ID \
  --num-nodes=$WORKLOAD_NUM_NODES
- ```
- > The docker image flags/arguments introduced in [workloads section](#workload-create) can be used with A3 machines as well.
+ ```
+
+ > The docker image flags/arguments introduced in [workloads section](#workload-create) can be used with A3 or A4 machines as well.
 
- In order to run NCCL test on A3 Ultra machines check out [this guide](/examples/nccl/nccl.md).
+ In order to run an NCCL test on A3 machines check out [this guide](/examples/nccl/nccl.md).
 
  ### Workload Priority and Preemption
  * Set the priority level of your workload with `--priority=LEVEL`
@@ -1498,4 +1596,5 @@ python xpk.py batch [other-options] --kind-cluster script
  Please note that all other xpk subcommands are intended for use with cloud systems on Google Cloud Engine (GCE) and don't support local testing. This includes commands like cluster, info, inspector, etc.
 
  # Other advanced usage
- [Use a Jupyter notebook to interact with a Cloud TPU cluster](xpk-notebooks.md)
+ [Use a Jupyter notebook to interact with a Cloud TPU cluster](xpk-notebooks.md) \
+ [Use Slurm like commands in XPK to execute workloads on top of GKE](xpk-slurm-commands.md)
@@ -3,62 +3,64 @@ xpk/main.py,sha256=wFc_kIM7kALGIY-JOcoa8m4BCWNRjl5tQ6ZDpv7HpSU,2350
  xpk/api/__init__.py,sha256=YPwWBbgLAu7L-YlTVGB2r8ZV4TzypURMRBcehSHHlLY,561
  xpk/api/storage_crd.yaml,sha256=r4WFXnSJJ25EUF-t4Ljfbl-cJoSaiFiZkP8451eTub4,1260
  xpk/commands/__init__.py,sha256=YPwWBbgLAu7L-YlTVGB2r8ZV4TzypURMRBcehSHHlLY,561
- xpk/commands/batch.py,sha256=OZoH2WsHaff2tZNU5bRqqnQGfmC_U0CZDIECpanwH8A,3862
- xpk/commands/cluster.py,sha256=wF8pWeCwf6TtYxYaiaI1icDKXnGIDVYgi28FouciYQs,25097
- xpk/commands/cluster_gcluster.py,sha256=-4vcxnOyd2GMKHYR1LBUYS7zQR3uJr5l5NFgu9Z33yI,9179
- xpk/commands/common.py,sha256=ycvmnHoiM2gsY1DPDb2cwEB0YhDeAFCpHmd0jyvWGBo,1448
+ xpk/commands/batch.py,sha256=bSxpIZpbLVpgk3AjEaNOxCfKa376p9QjUws_fwPoF-A,3818
+ xpk/commands/cluster.py,sha256=2kSzuyftn2aQ_SCBf856W7MU8VMN9KikhsEogm80sHQ,30611
+ xpk/commands/cluster_gcluster.py,sha256=lfgNrCQgSzG2-u49goSl06-JlVpytjRHb99xn6Osfjc,9893
+ xpk/commands/common.py,sha256=oozWV7Uyjz-zr-dPZGJ4kV_ZNEIZrTjdI_jxmvjvpyE,2404
  xpk/commands/config.py,sha256=gFNkf3ibsvZmcPpkpKXe-KJmHO5IKucNwLCXNgKvaDc,836
- xpk/commands/info.py,sha256=ee_kwRLaLD4Hvw8155uK3oCdF9wQmoGsWwu7M1SjPkU,7338
+ xpk/commands/info.py,sha256=BHqFFXm3Lg1P8qH1Z3gEXmh141-8udduS5EBk38auDg,7251
  xpk/commands/inspector.py,sha256=bwbZW-cLtiBw2V0zvoMprHWhgMbAYm0ez0GjjEqeUR8,12097
- xpk/commands/job.py,sha256=luzLV7CSgXPUM8i1ZPh6n-YPj3w_O5dDoqUjWfdFvbc,5507
+ xpk/commands/job.py,sha256=LCFB_l1v5x_k4Ov15hPDAhadcvMZlqvHkObNNuHMCdo,5479
  xpk/commands/kind.py,sha256=Vl3RT47kHCR0ORX9dK37HCiYtbmXJUCIAaq-QEbIclU,7578
- xpk/commands/kjob_common.py,sha256=aR6k_6yacr76QZDQmdoPO0M4Tg6H7ZPooKUTnOVZwXY,1596
- xpk/commands/run.py,sha256=-W32sfobmwxLNEQzBKFWgPs_UOWljRKjFyH-Unm9zsA,3853
- xpk/commands/shell.py,sha256=ZODaPNSmWHOpW48eHEt35IoM4x-0GQUGaLjOxQ63QSY,4235
- xpk/commands/storage.py,sha256=oCo6iPHR9IL5IO7PjQrbB9_NDCjcCO3HYxMRuNpNqUU,8818
+ xpk/commands/kjob_common.py,sha256=WaXKKPGQV1bL4gXP9qduweBtFQXwbuOezynLHBOKYCI,1672
+ xpk/commands/run.py,sha256=RR9DVwS_DOs2_hfZ08qU98slz27u0wVNgW6UfWQqEAI,3806
+ xpk/commands/shell.py,sha256=AjJ-yANH02q3pncKQdI5v1fDRL0MsxNlMbxR4epS19I,4190
+ xpk/commands/storage.py,sha256=uKTjozRuebG_3VQ1FYtO7ZHFIv1H-kMLV0nve9Y38fo,10354
  xpk/commands/version.py,sha256=CU4mb71r66U28krnPAopC6vBpdK-IGclsy5uNaQcgRY,824
- xpk/commands/workload.py,sha256=N3hqe3tWuQMjGuk4DaiDoehgejYGYKwWXRygzJ58h-c,31710
+ xpk/commands/workload.py,sha256=CyqcgEQkSdEjj9UHGW7GbVTIiEdtV_O5QM7zQpLf8xg,25095
  xpk/core/__init__.py,sha256=YPwWBbgLAu7L-YlTVGB2r8ZV4TzypURMRBcehSHHlLY,561
- xpk/core/capacity.py,sha256=pli6McSdPgGJxsBfJNVk5lCjehp1s1WI82ETAvJT1_I,5365
- xpk/core/cluster.py,sha256=GPuasSTadvgmIb9Iw7giqw7FJDg0jEzIwbQLEkzjuvE,18352
+ xpk/core/capacity.py,sha256=tZEHoli-4YsIqwMdwlBRJxAl-xjUOls-z3HOAsy3Z1M,5393
+ xpk/core/cluster.py,sha256=ZF2W2OysxvdocRpnGU6fl4oEVhA5pWpehef3A8xP53E,24173
  xpk/core/cluster_private.py,sha256=J2-UZ6t5f-UNdcSkuUr2_f4xk6xyrMH9Muk56trBh4M,6657
  xpk/core/commands.py,sha256=JiS4vJqWSLu8MKFBIKPBea9MKD2ZdpaQrziVQBqiDr4,10719
- xpk/core/config.py,sha256=qFYohiDizy4NRgsY8V-OraKVOCqzaObtiLizGaHRFfA,5659
+ xpk/core/config.py,sha256=Hm_0aRqrowMkA14Jz_4FMmWlqGMbkpuIfzs6VRN-Mpc,5715
  xpk/core/docker_container.py,sha256=GvkCJ2S5UKn8uh3pZhRd3X7iS0-PsQpRO8l7QhywVGc,7604
  xpk/core/docker_image.py,sha256=fEdpLQg1C205mMbLREy48WnhvNv2Nm4KQFX87B2CuiA,6624
- xpk/core/docker_manager.py,sha256=_fE27tDCJPd9dUfswYoQMzZRMAMfxq6SxdFdOT-gzIQ,10566
- xpk/core/docker_resources.py,sha256=D4xqdBj7-ezSDNrb1DNVh4n8bzdBGSDfcDtqzXD84D8,11452
- xpk/core/filestore.py,sha256=mCyZ4K1ggUAMWSopLeeb3yBS2dluF8GrrRry1HdiACU,7997
+ xpk/core/docker_manager.py,sha256=GJMz1GLSdvIQeOGC34llVKSDIP5hjYuLcJtz1F7xNxA,10566
+ xpk/core/docker_resources.py,sha256=3esxpXnoF0FedJL05zKxnG4W3VtMF5cdhbJdRq4OBgc,11184
+ xpk/core/filestore.py,sha256=7M-HAiXR-gEu3BJUgRY3cqEIfjo_FL0BAxq9MljEBt4,8022
  xpk/core/gcloud_context.py,sha256=p_LhWHo7GZonear2oupvTO-DpKqEkL0St7PnfxieRDY,5866
  xpk/core/gcluster_manager.py,sha256=JFip2hInFczFP2h5AXa70IPIuTaJ475TG6GxkQjKOI8,6337
- xpk/core/gcsfuse.py,sha256=rYeylcVylqV8UfnVe1keJ2ZT70TtE13wHWV2sHMKsgQ,1591
- xpk/core/kjob.py,sha256=hI6A3ezW7AX_iQSI_CsdmCMTyW9FD_0Q7kut964xIzE,13859
- xpk/core/kueue.py,sha256=krmpMNFpLd5refP1xvrqWO3RXblohpwThoWxCNKG5IA,10097
+ xpk/core/gcsfuse.py,sha256=kg5pgxdTjgiqquuGjev9fXzJPb8oiWPTK6wzCddzheQ,2125
35
+ xpk/core/kjob.py,sha256=I-dbiOkslCNEMWSivcqy07t2ieDg5eYpPQdXeFjHhkI,14664
36
+ xpk/core/kueue.py,sha256=VdBFJPhWLCLZJZbtZkwXgbGNQR_LgzVgFVAsocCXVBI,10901
37
37
  xpk/core/monitoring.py,sha256=v9MvLzNfvJAVby_ehSlPe6PaO0_pf3shkXg5gd-UWm8,4338
38
- xpk/core/nap.py,sha256=BNO0fnTpza310cAVwITYktj1SN9tXVT_kCnsufKzYOE,12136
39
- xpk/core/network.py,sha256=kfvOJREHAm9JtGYdi6csnJeZNg81cjf5-5ECweZ6sWw,10478
40
- xpk/core/nodepool.py,sha256=1aBZXvaXWEXf2YJXj7w3NDQiPTLJ8b6cmizVPzeoVSY,22002
41
- xpk/core/pathways.py,sha256=OHJOpf0qbKGECjYD31TUJ4rT5SDgs9-AOtLWMGjBqxQ,11615
38
+ xpk/core/mtc.py,sha256=pO7p3l-EzLFdTE8MdwWV8i0Zu-7epGql_kPoksVofIU,6259
39
+ xpk/core/nap.py,sha256=30Fa1-xjbQCMAOj9L1t9K2X_O5Rauz0V7k1_qclci2o,12263
40
+ xpk/core/network.py,sha256=hQR5Kab5Q5CYjggriNIhNh78Aq0CCF-vPUQI6BC4Wgw,10537
41
+ xpk/core/nodepool.py,sha256=jkUWmAX7JJWocybH466_t-7KtfHpfulZhFN7-DprgEA,21758
42
+ xpk/core/pathways.py,sha256=NgrW4hoiSLM59h25R8Zi1a--TgDuiy_f7h2u6KXVz-o,10613
42
43
  xpk/core/ray.py,sha256=UxOpIc2enHi1fQ4h3KO8FH8bIyEMtYzGtPoeqJKGG4o,6337
43
- xpk/core/resources.py,sha256=IXzvuA8saK6Xvv4MHTWYVeWJDR3MbH_RScd-Dp_qxlM,7669
44
- xpk/core/scheduling.py,sha256=8BAg8YyftJULHeq-A5nmgpPYVjyEjbVjSG6cWYCAcX0,8348
45
- xpk/core/storage.py,sha256=oduGqythFOGIZhN9H-nixLn0Zt-aEZunyLG15XCSpqs,18100
46
- xpk/core/system_characteristics.py,sha256=6CwanJZ3jJCJAiVIr9QArBFIcYitt_YiJvb-K5nYjjk,31657
44
+ xpk/core/resources.py,sha256=uezEuHw2OzpM4LT2c2EjUCPr9lhBTfLnOPay7hGVyj4,8276
45
+ xpk/core/scheduling.py,sha256=OG1ZNS8tR29o1KIo8ijMaIuHsPeRfP23jfx4t3PkmGs,9157
46
+ xpk/core/storage.py,sha256=3MaTWjfBDW6uP707nG6fVL-R2yEti74DbB8DiJJj3e4,19628
47
+ xpk/core/system_characteristics.py,sha256=5GRzpKigAsVm7fzCtOs04Pi1UnurYu2KYFj-wdAZkVw,31836
47
48
  xpk/core/vertex.py,sha256=pD9UBL62xHomuqdNu7xKccfD2KCbjgohMk3AhX-CXSw,3644
48
- xpk/core/workload.py,sha256=-lWKkQHaMgc8lBlI-pVnNdz9k5KhuMWL53RDVP9mXl8,11611
49
+ xpk/core/workload.py,sha256=gD90rgztF8zWmcEKz8inC1yNhjL4KQVIQDiUCs-359g,10003
49
50
  xpk/core/blueprint/__init__.py,sha256=YPwWBbgLAu7L-YlTVGB2r8ZV4TzypURMRBcehSHHlLY,561
50
- xpk/core/blueprint/blueprint_definitions.py,sha256=tz2cL8mtRxQroa_EKvW5S6PZRuSezRqwrFcK0MaFyrg,1704
51
- xpk/core/blueprint/blueprint_generator.py,sha256=OpQ2vwUGDO73MRrUUg6td-tXg2mZHx7MmeWNUkRbN9k,24893
51
+ xpk/core/blueprint/blueprint_definitions.py,sha256=5i331XA-2yP_ALyB6XU5tP2Tf9iHcIX5g0TilxQi8zE,1800
52
+ xpk/core/blueprint/blueprint_generator.py,sha256=jTAg1Yig9BwS2l-o2IJtGZHeYU5KfvYzcdDrl7ZORhs,35337
52
53
  xpk/core/remote_state/__init__.py,sha256=PkV8D9WOtlJHH5AIxsQaKeIBcmupT_Ol_bwJgN6G2I8,561
53
54
  xpk/core/remote_state/fuse_remote_state.py,sha256=3Dx4ZZd0NFF5-MlqGWHzz8H4bjYiPOWdF_YSEnKUPQ8,3246
54
55
  xpk/core/remote_state/remote_state_client.py,sha256=6PcR92Xy_RMjlF4AscanQ1jXNHnewLWGNC2v53jbzD4,1077
55
56
  xpk/core/workload_decorators/__init__.py,sha256=YPwWBbgLAu7L-YlTVGB2r8ZV4TzypURMRBcehSHHlLY,561
56
- xpk/core/workload_decorators/rdma_decorator.py,sha256=7Ps8QKtDpjgQ04-ZLfNNKFv4wdYdZhjL5NWeZcsgL8E,3977
57
- xpk/core/workload_decorators/storage_decorator.py,sha256=KBt7zpcftczDZ_8a5Sy2MISrYcaH6Zknfbtro0Bmn_I,1737
58
- xpk/core/workload_decorators/tcpxo_decorator.py,sha256=pj-sTUgVcRTv_BvymeVBVV6SvPSKD4vSVop4o5FklpI,6156
57
+ xpk/core/workload_decorators/rdma_decorator.py,sha256=lLURBW6eVmFZw-o1BzaIpqVvE6th8P99bQJUNNhmrOY,3925
58
+ xpk/core/workload_decorators/storage_decorator.py,sha256=Bj1lRh65s40AJDsWM0xTiHFaWtKC272eImjIjN8Z38c,1967
59
+ xpk/core/workload_decorators/tcpx_decorator.py,sha256=rzOaufKdN8wgv-h22USdebJPFLGYIhjpzEs6WbmzJII,5666
60
+ xpk/core/workload_decorators/tcpxo_decorator.py,sha256=uwArPI9Lkre_0dtcO_oztDO7LU_yrfaSm_QjMUwzXLM,6302
59
61
  xpk/parser/__init__.py,sha256=YPwWBbgLAu7L-YlTVGB2r8ZV4TzypURMRBcehSHHlLY,561
60
62
  xpk/parser/batch.py,sha256=mJU-Cp1yTLje59vD-B1IiBcUeD-ZmEsoeB4xhj9cflc,1406
61
- xpk/parser/cluster.py,sha256=kEHq1zIfNCOnmf4cNTGCY0na7bylTmRZDAjjuRj7TkI,22196
63
+ xpk/parser/cluster.py,sha256=hExmC_SFvs9MnLihysAGtIG9t091_gw3J75-zjL8uCs,27864
62
64
  xpk/parser/common.py,sha256=_F2rwsZka15difkvPA1yPARWr9I9ewx8PMzgwMLTvjM,7220
63
65
  xpk/parser/config.py,sha256=-XnWx9aFsBW4Uzo_hpOMD2ZQ0bdZLvq1ksv83_5jqSM,1633
64
66
  xpk/parser/core.py,sha256=VRJerlS92ufoQbG1mZv7B04DAP4qGkBHa4pRXgcbAs0,4761
@@ -68,25 +70,25 @@ xpk/parser/job.py,sha256=5RdE70rucGfrsn65l7Ho6RmO06mag1S0AO-3saVuXyw,4328
68
70
  xpk/parser/kind.py,sha256=sgPCqNVrgmFLcOBEbhlaphwVXxMh_opP9ntCq4KPePE,2682
69
71
  xpk/parser/run.py,sha256=oi_ksSyJ8Ooffe2EgoV_ecpmXEmNGVotjpIQH-HjufE,1481
70
72
  xpk/parser/shell.py,sha256=VC8p-kz9XjJZW9DXZ-rnv41XnRDRpQRFywHpB5j7tfc,1970
71
- xpk/parser/storage.py,sha256=2CLL7TW2rclAtxk0klQmouR-BWoLUcEYLa6ZvIkHRs0,9258
73
+ xpk/parser/storage.py,sha256=Vtl9KxWFOxoNQmbfMBN0Nwc4Z3Nasx68td3tUmAgkuI,9894
72
74
  xpk/parser/validators.py,sha256=-NBZelvfwZRzjz-YUCreD8EzMLHll8PZM-d-MVm2PG4,1192
73
75
  xpk/parser/version.py,sha256=eJo4PAbbmRQZulgKBs_ytbVgV9zAaaXeNzMMxmgFMVY,769
74
- xpk/parser/workload.py,sha256=GNcJEOvldVHKZPIO6cXAIXMpyHq2M9kdOJ7CZP86saU,24177
76
+ xpk/parser/workload.py,sha256=hqmy3KtR0Byhrn25Qo72K_2rUyIF4oujrtibp5mq7Lc,24958
75
77
  xpk/templates/__init__.py,sha256=7mu-VQDQMyxM5To0KOhuYe4y2TYGsEkfV7hXZmUyih4,561
76
78
  xpk/templates/storage.yaml,sha256=AykdyMtDnKZF8Y_0BYxoYP03hEIzEk6iNalXAQHgAls,163
77
79
  xpk/utils/__init__.py,sha256=YPwWBbgLAu7L-YlTVGB2r8ZV4TzypURMRBcehSHHlLY,561
78
80
  xpk/utils/console.py,sha256=bKibWIswcB1aWGZp0ZpL-NEhvTrxJMy7wWD4-3BVTKI,1479
79
81
  xpk/utils/file.py,sha256=jlv2o4ah9UmWJ7NuOCnTwtMZFLerOATBIMQeQ03-kIw,2142
80
82
  xpk/utils/gcs_utils.py,sha256=zg-XSTv4G4TFjeT2bNBm2WLdDXPrOZi0rNv_JdppNg4,4113
81
- xpk/utils/kubectl.py,sha256=-CyxSMTXMq05S0D53tp2Ue9j0UIpWgyEv8p7QJ2b1Ic,1758
83
+ xpk/utils/kubectl.py,sha256=WKB9UhpouPN9G4n2ejRi_PgsYLI0R01gzkS1WGU6mJA,1828
82
84
  xpk/utils/network.py,sha256=AAm9qGGFAEfAh1FK39muBheXAo7tdBlxR0A8Tg0TyYQ,4205
83
85
  xpk/utils/objects.py,sha256=OwMNxB4TGX21qnJPdZo2YBMPMbQPqOtHMh19QhoRNRY,2498
84
86
  xpk/utils/templates.py,sha256=g8zgR1MxyJmTmzM_wnvH30FmcbgQMC47UQwBtLj8B9k,807
85
87
  xpk/utils/validation.py,sha256=bSJApIY0Lk48I4EEQP08ZUvolXt_APpYXVGJXFQ_YLA,2711
86
88
  xpk/utils/yaml.py,sha256=j8xuAJ9yAAwnQi6ozwZ-nMnDyDnc3xWkeBZMtSuP4RU,844
87
- xpk-0.7.2.dist-info/licenses/LICENSE,sha256=z8d0m5b2O9McPEK1xHG_dWgUBT6EfBDz6wA0F7xSPTA,11358
88
- xpk-0.7.2.dist-info/METADATA,sha256=rzXsUzC86TBMyfXBRMKTOHiwX_e_wA6V8Lb0giiylBw,63800
89
- xpk-0.7.2.dist-info/WHEEL,sha256=CmyFI0kx5cdEMTLiONQRbGQwjIoR1aIYB7eCAQ4KPJ0,91
90
- xpk-0.7.2.dist-info/entry_points.txt,sha256=mzEtiIesFkT1kmcTUVDA1o3uOhiniX6tIz2wmOlMu1M,38
91
- xpk-0.7.2.dist-info/top_level.txt,sha256=aDe4N0jicmuWExx_6w0TxWQJaEuPSs9BnLU-3aF1GLo,4
92
- xpk-0.7.2.dist-info/RECORD,,
89
+ xpk-0.9.0.dist-info/licenses/LICENSE,sha256=z8d0m5b2O9McPEK1xHG_dWgUBT6EfBDz6wA0F7xSPTA,11358
90
+ xpk-0.9.0.dist-info/METADATA,sha256=NcdCQuIRdfrvXobp08SBa-96KXFSc7zs23UE1VW9_Vo,69675
91
+ xpk-0.9.0.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
92
+ xpk-0.9.0.dist-info/entry_points.txt,sha256=mzEtiIesFkT1kmcTUVDA1o3uOhiniX6tIz2wmOlMu1M,38
93
+ xpk-0.9.0.dist-info/top_level.txt,sha256=aDe4N0jicmuWExx_6w0TxWQJaEuPSs9BnLU-3aF1GLo,4
94
+ xpk-0.9.0.dist-info/RECORD,,
@@ -1,5 +1,5 @@
1
1
  Wheel-Version: 1.0
2
- Generator: setuptools (78.1.0)
2
+ Generator: setuptools (80.9.0)
3
3
  Root-Is-Purelib: true
4
4
  Tag: py3-none-any
5
5