gmicloud 0.1.6__py3-none-any.whl → 0.1.9__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,264 @@
+ Metadata-Version: 2.4
+ Name: gmicloud
+ Version: 0.1.9
+ Summary: GMI Cloud Python SDK
+ Author-email: GMI <gmi@gmitec.net>
+ License: MIT
+ Classifier: Programming Language :: Python :: 3
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Operating System :: OS Independent
+ Requires-Python: >=3.6
+ Description-Content-Type: text/markdown
+
+ # GMICloud SDK
+
+ ## Overview
+ Before you start: our service and GPU resources are currently invite-only, so please contact our team (getstarted@gmicloud.ai) to get an invite if you don't have one yet.
+
+ The GMI Inference Engine SDK provides a Python interface for deploying and managing machine learning models in production environments. It allows users to create model artifacts, schedule tasks for serving models, and call inference APIs easily.
+
+ This SDK streamlines the process of utilizing GMI Cloud capabilities such as deploying models with Kubernetes-based Ray services, managing resources automatically, and accessing model inference endpoints. With minimal setup, developers can focus on building ML solutions instead of infrastructure.
+
+ ## Features
+
+ - Artifact Management: Easily create, update, and manage ML model artifacts.
+ - Task Management: Quickly create, schedule, and manage deployment tasks for model inference.
+ - Usage Data Retrieval: Fetch and analyze usage data to optimize resource allocation.
+
+ ## Installation
+
+ To install the SDK, use pip:
+
+ ```bash
+ pip install gmicloud
+ ```
+
+ ## Setup
+
+ You must configure authentication credentials for accessing the GMI Cloud API.
+ To create an account and get login info, please visit the **GMI inference platform: https://inference-engine.gmicloud.ai/**.
+
+ There are two ways to configure the SDK:
+
+ ### Option 1: Using Environment Variables
+
+ Set the following environment variables:
+
+ ```shell
+ export GMI_CLOUD_CLIENT_ID=<YOUR_CLIENT_ID> # Pick whatever client ID you like.
+ export GMI_CLOUD_EMAIL=<YOUR_EMAIL>
+ export GMI_CLOUD_PASSWORD=<YOUR_PASSWORD>
+ ```
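The two options can be combined; explicitly passed parameters typically take precedence over the environment. A minimal sketch of that precedence rule (plain Python; `resolve_credential` is a hypothetical helper, not an SDK API):

```python
import os

def resolve_credential(explicit, env_var):
    # Hypothetical helper (not part of the SDK): prefer an explicitly
    # passed value, otherwise fall back to the environment variable.
    if explicit:
        return explicit
    return os.environ.get(env_var)

# Simulate the two configuration options.
os.environ["GMI_CLOUD_EMAIL"] = "me@example.com"
print(resolve_credential(None, "GMI_CLOUD_EMAIL"))               # falls back to the env var
print(resolve_credential("you@example.com", "GMI_CLOUD_EMAIL"))  # explicit value wins
```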
+
+ ### Option 2: Passing Credentials as Parameters
+
+ Pass `client_id`, `email`, and `password` directly to the `Client` object when initializing it in your script:
+
+ ```python
+ from gmicloud import Client
+
+ client = Client(client_id="<YOUR_CLIENT_ID>", email="<YOUR_EMAIL>", password="<YOUR_PASSWORD>")
+ ```
+
+ ## Quick Start
+
+ ### 1. How to run the code in the examples folder
+ ```bash
+ cd path/to/gmicloud-sdk
+ # Create and activate a virtual environment
+ python -m venv venv
+ source venv/bin/activate
+
+ pip install -r requirements.txt
+ python -m examples.create_task_from_artifact_template
+ ```
+
+ ### 2. Create an inference task from an artifact template
+
+ This is the simplest example: deploy an inference task using an existing artifact template.
+
+ Up-to-date code is in /examples/create_task_from_artifact_template.py:
+
+ ```python
+ from datetime import datetime
+ import os
+ import sys
+
+ from gmicloud import *
+ from examples.completion import call_chat_completion
+
+ cli = Client()
+
+ # List templates offered by GMI cloud
+ templates = cli.list_templates()
+ print(f"Found {len(templates)} templates: {templates}")
+
+ # Pick a template from the list
+ pick_template = "Llama-3.1-8B"
+
+ # Create Artifact from template
+ artifact_id, recommended_replica_resources = cli.create_artifact_from_template(templates[0])
+ print(f"Created artifact {artifact_id} with recommended replica resources: {recommended_replica_resources}")
+
+ # Create Task based on Artifact
+ task_id = cli.create_task(artifact_id, recommended_replica_resources, TaskScheduling(
+     scheduling_oneoff=OneOffScheduling(
+         trigger_timestamp=int(datetime.now().timestamp()),
+         min_replicas=1,
+         max_replicas=1,
+     )
+ ))
+ task = cli.task_manager.get_task(task_id)
+ print(f"Task created: {task.config.task_name}. You can check details at https://inference-engine.gmicloud.ai/user-console/task")
+
+ # Start Task and wait for it to be ready
+ cli.start_task_and_wait(task.task_id)
+
+ # Test by calling chat completion
+ print(call_chat_completion(cli, task.task_id))
+ ```
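The `trigger_timestamp` used above is just a Unix epoch integer, so a one-off run can also be scheduled for a future time using only the standard library. A sketch (no SDK required; `oneoff_trigger` is illustrative, not an SDK function):

```python
from datetime import datetime, timedelta

def oneoff_trigger(delay_minutes: int = 0) -> int:
    # Unix timestamp `delay_minutes` from now; 0 mirrors the
    # int(datetime.now().timestamp()) used in the example above.
    return int((datetime.now() + timedelta(minutes=delay_minutes)).timestamp())

now_ts = oneoff_trigger()
later_ts = oneoff_trigger(5)
print(later_ts - now_ts)  # about 300 seconds
```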
+
+ ### 3. Create an inference task from a custom model with a local vLLM / SGLang serve command
+
+ * The full example is available at [examples/inference_task_with_custom_model.py](https://github.com/GMISWE/python-sdk/blob/main/examples/inference_task_with_custom_model.py)
+
+ 1. Prepare a custom model checkpoint (using a model downloaded from Hugging Face as an example)
+
+ ```python
+ # Download a model from Hugging Face
+ from huggingface_hub import snapshot_download
+
+ model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
+ model_checkpoint_save_dir = "files/model_garden"
+ snapshot_download(repo_id=model_name, local_dir=model_checkpoint_save_dir)
+ ```
+
+ #### Pre-downloaded models
+ ```
+ "deepseek-ai/DeepSeek-R1"
+ "deepseek-ai/DeepSeek-V3-0324"
+ "deepseek-ai/DeepSeek-R1-Distill-Llama-70B"
+ "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"
+ "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B"
+ "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
+ "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
+ "meta-llama/Llama-3.3-70B-Instruct"
+ "meta-llama/Llama-4-Maverick-17B-128E-Instruct"
+ "meta-llama/Llama-4-Scout-17B-16E-Instruct"
+ "Qwen/QwQ-32B"
+ ```
+
+ 2. Find a template with a specific vLLM or SGLang version
+
+ ```python
+ # export GMI_CLOUD_CLIENT_ID=<YOUR_CLIENT_ID>
+ # export GMI_CLOUD_EMAIL=<YOUR_EMAIL>
+ # export GMI_CLOUD_PASSWORD=<YOUR_PASSWORD>
+ cli = Client()
+
+ # List templates offered by GMI cloud
+ templates = cli.artifact_manager.list_public_template_names()
+ print(f"Found {len(templates)} templates: {templates}")
+ ```
+
+ 3. Pick a template (e.g. SGLang 0.4.5) and prepare a local serve command
+
+ ```python
+ # Example for a vLLM server
+ picked_template_name = "gmi_vllm_0.8.4"
+ serve_command = "vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B --trust-remote-code --gpu-memory-utilization 0.8"
+
+ # Example for an SGLang server
+ picked_template_name = "gmi_sglang_0.4.5.post1"
+ serve_command = "python3 -m sglang.launch_server --model-path deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B --trust-remote-code --mem-fraction-static 0.8 --tp 2"
+ ```
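Since the serve command is passed around as a single string, it can be sanity-checked locally before an artifact is created. A small sketch using only the standard library (an editor's illustration, not an SDK feature):

```python
import shlex

serve_command = "vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B --trust-remote-code --gpu-memory-utilization 0.8"

# Tokenize the command the way a shell would, then inspect its parts.
tokens = shlex.split(serve_command)
model_arg = tokens[2]  # positional model argument after "vllm serve"
flags = [t for t in tokens if t.startswith("--")]
print(model_arg)
print(flags)
```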
+
+ 4. Create an artifact. For a pre-downloaded model, pass the `pre_download_model` parameter; for a custom model, upload the model checkpoint to the artifact afterwards. The artifact can be reused to create inference tasks later, and it also suggests recommended resources for each inference server replica.
+
+ ```python
+ artifact_name = "artifact_hello_world"
+ # pick_pre_downloaded_model is one of the pre-downloaded model names listed above,
+ # e.g. "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
+ artifact_id, recommended_replica_resources = cli.artifact_manager.create_artifact_for_serve_command_and_custom_model(
+     template_name=picked_template_name,
+     artifact_name=artifact_name,
+     serve_command=serve_command,
+     gpu_type="H100",
+     artifact_description="This is a test artifact",
+     pre_download_model=pick_pre_downloaded_model,
+ )
+ print(f"Created artifact {artifact_id} with recommended resources: {recommended_replica_resources}")
+ ```
+
+ Alternatively, upload a custom model checkpoint to the artifact:
+ ```python
+ import time
+
+ cli.artifact_manager.upload_model_files_to_artifact(artifact_id, model_checkpoint_save_dir)
+
+ # Wait about 10 minutes for the artifact to be ready
+ time.sleep(10 * 60)
+ ```
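Instead of a fixed `time.sleep`, a poll-with-timeout loop is usually friendlier. A generic sketch (the ready-check callback is a stand-in for whatever status call you use, e.g. `get_artifact`; `wait_until` is not an SDK function):

```python
import time

def wait_until(check, timeout_s: float, interval_s: float = 1.0) -> bool:
    # Poll `check()` until it returns True or the timeout elapses.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return False

# Toy usage: the check succeeds on the third poll.
calls = {"n": 0}
def fake_ready():
    calls["n"] += 1
    return calls["n"] >= 3

ok = wait_until(fake_ready, timeout_s=5, interval_s=0.01)
print(ok)
```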
+
+ 5. Create an inference task (defining min/max replicas), start it, and wait for it to be ready
+
+ ```python
+ # Create Task based on Artifact
+ new_task_id = cli.task_manager.create_task_from_artifact_id(artifact_id, recommended_replica_resources, TaskScheduling(
+     scheduling_oneoff=OneOffScheduling(
+         trigger_timestamp=int(datetime.now().timestamp()),
+         min_replicas=1,
+         max_replicas=4,
+     )
+ ))
+ task = cli.task_manager.get_task(new_task_id)
+ print(f"Task created: {task.config.task_name}. You can check details at https://inference-engine.gmicloud.ai/user-console/task")
+
+ # Start Task and wait for it to be ready
+ cli.task_manager.start_task_and_wait(new_task_id)
+ ```
+
+ 6. Test with a sample chat completion request via the OpenAI client
+
+ ```python
+ from openai import OpenAI
+
+ api_key = "<YOUR_API_KEY>"
+ endpoint_url = cli.task_manager.get_task_endpoint_url(new_task_id)
+ open_ai = OpenAI(
+     base_url=os.getenv("OPENAI_API_BASE", f"https://{endpoint_url}/serve/v1/"),
+     api_key=api_key
+ )
+ # Make a chat completion request using the OpenAI client.
+ completion = open_ai.chat.completions.create(
+     model=picked_template_name,
+     messages=[
+         {"role": "system", "content": "You are a helpful assistant."},
+         {"role": "user", "content": "Who are you?"},
+     ],
+     max_tokens=500,
+     temperature=0.7
+ )
+ print(completion.choices[0].message.content)
+ ```
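The `base_url` logic above prefers an `OPENAI_API_BASE` override and otherwise derives the URL from the task endpoint. Isolated as a runnable sketch (the endpoint host here is made up; `serve_base_url` is illustrative, not an SDK function):

```python
import os

def serve_base_url(endpoint_url: str) -> str:
    # Prefer an explicit OPENAI_API_BASE override, else derive from the endpoint.
    return os.getenv("OPENAI_API_BASE", f"https://{endpoint_url}/serve/v1/")

os.environ.pop("OPENAI_API_BASE", None)  # ensure the fallback path is exercised
url = serve_base_url("example-task.gmicloud.ai")  # hypothetical endpoint host
print(url)
```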
+
+ ## API Reference
+
+ ### Client
+
+ Represents the entry point for interacting with GMI Cloud APIs.
+
+     Client(
+         client_id: Optional[str] = "",
+         email: Optional[str] = "",
+         password: Optional[str] = ""
+     )
+
+ ### Artifact Management
+
+ * get_artifact_templates(): Fetch a list of available artifact templates.
+ * create_artifact_from_template(template_id: str): Create a model artifact from a given template.
+ * get_artifact(artifact_id: str): Get details of a specific artifact.
+
+ ### Task Management
+
+ * create_task_from_artifact_template(template_id: str, scheduling: TaskScheduling): Create and schedule a task using an artifact template.
+ * start_task(task_id: str): Start a task.
+ * get_task(task_id: str): Retrieve the status and details of a specific task.
+
+ ## Notes & Troubleshooting
@@ -0,0 +1,31 @@
+ gmicloud/__init__.py,sha256=xSzrAxiby5Te20yhy1ZylGHmQKVV_w1QjFe6D99VZxw,968
+ gmicloud/client.py,sha256=nTMrKhyrGSx9qUDTice2HqmIqlIlsuKoxHnb0T-Ls3c,10947
+ gmicloud/_internal/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ gmicloud/_internal/_config.py,sha256=BenHiCnedpHA5phz49UWBXa1mg_q9W8zYs7A8esqGcU,494
+ gmicloud/_internal/_constants.py,sha256=Y085dwFlqdFkCf39iBfxz39QiiB7lX59ayNJjB86_m4,378
+ gmicloud/_internal/_enums.py,sha256=aN3At0_iV_6aaUsrOy-JThtRUokeY4nTyxxPLZmIDBU,1093
+ gmicloud/_internal/_exceptions.py,sha256=hScBq7n2fOit4_umlkabZJchY8zVbWSRfWM2Y0rLCbw,306
+ gmicloud/_internal/_models.py,sha256=iSRHMUPx_iXEraSg3ouAIM4ipVXQop3MuCGJFvFvMLY,25011
+ gmicloud/_internal/_client/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ gmicloud/_internal/_client/_artifact_client.py,sha256=0lyHAdUybN8A1mEwZ7p1yK2yQEyoDG2vTB4Qe5RI2ik,9974
+ gmicloud/_internal/_client/_auth_config.py,sha256=zlCUPHN_FgWmOAxOAgjBtGRbaChqMa9PPGPuVNKvnc8,2700
+ gmicloud/_internal/_client/_decorator.py,sha256=sy4gxzsUB6ORXHw5pqmMf7TTlK41Nmu1fhIhK2AIsbY,670
+ gmicloud/_internal/_client/_file_upload_client.py,sha256=r29iXG_0DOi-uTLu9plpfZMWGqOck_AdDHJZprcf8uI,4918
+ gmicloud/_internal/_client/_http_client.py,sha256=j--3emTjJ_l9CTdnkTbcpf7gYcUEl341pv2O5cU67l0,5741
+ gmicloud/_internal/_client/_iam_client.py,sha256=iXam-UlTCJWCpXmxAhqCo0J2m6nPzNOWa06R5xAy5nQ,8297
+ gmicloud/_internal/_client/_task_client.py,sha256=69OqZC_kwSDkTSVVyi51Tn_OyUV6R0nin4z4gLfZ-Lg,6141
+ gmicloud/_internal/_client/_video_client.py,sha256=bjSmChBydGXwuVIm37ltKGmduPJa-H0Bjyc-qhd_PZI,4694
+ gmicloud/_internal/_manager/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ gmicloud/_internal/_manager/_artifact_manager.py,sha256=Fq5Qifrdq5yn_QkMAoykuWE04FgqNOd9yZrFQdAi5J8,21874
+ gmicloud/_internal/_manager/_iam_manager.py,sha256=nAqPCaUfSXTnx2MEQa8e0YUOBFYWDRiETgK1PImdf4o,1167
+ gmicloud/_internal/_manager/_task_manager.py,sha256=g2K0IG1EXzcZRAfXLhUp78em0ZVvKyqlr1PGTBR04JQ,12501
+ gmicloud/_internal/_manager/_video_manager.py,sha256=_PwooKf9sZkIx4mYTy57pXtP7J3uwHQHgscns5hQYZ0,3376
+ gmicloud/_internal/_manager/serve_command_utils.py,sha256=0PXDRuGbLw_43KBwCxPRdb4QqijZrzYyvM6WOZ2-Ktg,4583
+ gmicloud/tests/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ gmicloud/tests/test_artifacts.py,sha256=w0T0EpATIGLrSUPaBfTZ2ZC_X2XeaTlFEi3DZ4evIcE,15825
+ gmicloud/tests/test_tasks.py,sha256=yL-aFf80ShgTyxEONTWh-xbWDf5XnUNtIeA5hYvhKM0,10963
+ gmicloud/utils/uninstall_packages.py,sha256=zzuuaJPf39oTXWZ_7tUAGseoxocuCbbkoglJSD5yDrE,1127
+ gmicloud-0.1.9.dist-info/METADATA,sha256=sZlrvpl2xiwBoVJj79IQ0JIFXg8md9mCmA13P99dXj0,9028
+ gmicloud-0.1.9.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+ gmicloud-0.1.9.dist-info/top_level.txt,sha256=AZimLw3y0WPpLiSiOidZ1gD0dxALh-jQNk4fxC05hYE,9
+ gmicloud-0.1.9.dist-info/RECORD,,
@@ -1,5 +1,5 @@
  Wheel-Version: 1.0
- Generator: setuptools (76.0.0)
+ Generator: setuptools (80.9.0)
  Root-Is-Purelib: true
  Tag: py3-none-any
 
@@ -1,147 +0,0 @@
- Metadata-Version: 2.2
- Name: gmicloud
- Version: 0.1.6
- Summary: GMI Cloud Python SDK
- Author-email: GMI <support@gmicloud.ai>
- License: MIT
- Classifier: Programming Language :: Python :: 3
- Classifier: License :: OSI Approved :: MIT License
- Classifier: Operating System :: OS Independent
- Requires-Python: >=3.6
- Description-Content-Type: text/markdown
-
- # GMICloud SDK (Beta)
-
- ## Overview
- Before you start: Our service and GPU resource is currenly invite-only so please contact our team (getstarted@gmicloud.ai) to get invited if you don't have one yet.
-
- The GMI Inference Engine SDK provides a Python interface for deploying and managing machine learning models in production environments. It allows users to create model artifacts, schedule tasks for serving models, and call inference APIs easily.
-
- This SDK streamlines the process of utilizing GMI Cloud capabilities such as deploying models with Kubernetes-based Ray services, managing resources automatically, and accessing model inference endpoints. With minimal setup, developers can focus on building ML solutions instead of infrastructure.
-
- ## Features
-
- - Artifact Management: Easily create, update, and manage ML model artifacts.
- - Task Management: Quickly create, schedule, and manage deployment tasks for model inference.
- - Usage Data Retrieval : Fetch and analyze usage data to optimize resource allocation.
-
- ## Installation
-
- To install the SDK, use pip:
-
- ```bash
- pip install gmicloud
- ```
-
- ## Setup
-
- You must configure authentication credentials for accessing the GMI Cloud API.
- To create account and get log in info please visit **GMI inference platform: https://inference-engine.gmicloud.ai/**.
-
- There are two ways to configure the SDK:
-
- ### Option 1: Using Environment Variables
-
- Set the following environment variables:
-
- ```shell
- export GMI_CLOUD_CLIENT_ID=<YOUR_CLIENT_ID>
- export GMI_CLOUD_EMAIL=<YOUR_EMAIL>
- export GMI_CLOUD_PASSWORD=<YOUR_PASSWORD>
- ```
-
- ### Option 2: Passing Credentials as Parameters
-
- Pass `client_id`, `email`, and `password` directly to the Client object when initializing it in your script:
-
- ```python
- from gmicloud import Client
-
- client = Client(client_id="<YOUR_CLIENT_ID>", email="<YOUR_EMAIL>", password="<YOUR_PASSWORD>")
- ```
-
- ## Quick Start
-
- ### 1. How to run the code in the example folder
- ```bash
- cd path/to/gmicloud-sdk
- # Create a virtual environment
- python -m venv venv
- source venv/bin/activate
-
- pip install -r requirements.txt
- python -m examples.create_task_from_artifact_template.py
- ```
-
- ### 2. Create an inference task from an artifact template
-
- This is the simplest example to deploy an inference task using an existing artifact template:
-
- Up-to-date code in /examples/create_task_from_artifact_template.py
-
- ```python
- from datetime import datetime
- import os
- import sys
-
- from gmicloud import *
- from examples.completion import call_chat_completion
-
- cli = Client()
-
- # List templates offered by GMI cloud
- templates = cli.list_templates()
- print(f"Found {len(templates)} templates: {templates}")
-
- # Pick a template from the list
- pick_template = "Llama-3.1-8B"
-
- # Create Artifact from template
- artifact_id, recommended_replica_resources = cli.create_artifact_from_template(templates[0])
- print(f"Created artifact {artifact_id} with recommended replica resources: {recommended_replica_resources}")
-
- # Create Task based on Artifact
- task_id = cli.create_task(artifact_id, recommended_replica_resources, TaskScheduling(
- scheduling_oneoff=OneOffScheduling(
- trigger_timestamp=int(datetime.now().timestamp()),
- min_replicas=1,
- max_replicas=1,
- )
- ))
- task = cli.task_manager.get_task(task_id)
- print(f"Task created: {task.config.task_name}. You can check details at https://inference-engine.gmicloud.ai/user-console/task")
-
- # Start Task and wait for it to be ready
- cli.start_task_and_wait(task.task_id)
-
- # Testing with calling chat completion
- print(call_chat_completion(cli, task.task_id))
-
- ```
-
- ## API Reference
-
- ### Client
-
- Represents the entry point to interact with GMI Cloud APIs.
- Client(
- client_id: Optional[str] = "",
- email: Optional[str] = "",
- password: Optional[str] = ""
- )
-
- ### Artifact Management
-
- * get_artifact_templates(): Fetch a list of available artifact templates.
- * create_artifact_from_template(template_id: str): Create a model artifact from a given template.
- * get_artifact(artifact_id: str): Get details of a specific artifact.
-
- ### Task Management
-
- * create_task_from_artifact_template(template_id: str, scheduling: TaskScheduling): Create and schedule a task using an
- artifact template.
- * start_task(task_id: str): Start a task.
- * get_task(task_id: str): Retrieve the status and details of a specific task.
-
- ## Notes & Troubleshooting
- k
@@ -1,27 +0,0 @@
- gmicloud/__init__.py,sha256=aIgu4MAw4nExv781-pzSZLG8MscqAMZ5lM5fGyqg7QU,984
- gmicloud/client.py,sha256=G0tD0xQnpqDKS-3l-AAU-K3FAHOsqsTzsAq2NVxiamY,10539
- gmicloud/_internal/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
- gmicloud/_internal/_config.py,sha256=qIH76TSyS3MQWe62LHI46RJhDnklNFisdajY75oUAqE,218
- gmicloud/_internal/_constants.py,sha256=Y085dwFlqdFkCf39iBfxz39QiiB7lX59ayNJjB86_m4,378
- gmicloud/_internal/_enums.py,sha256=5d6Z8TFJYCmhNI1TDbPpBbG1tNe96StIEH4tEw20RZk,789
- gmicloud/_internal/_exceptions.py,sha256=hScBq7n2fOit4_umlkabZJchY8zVbWSRfWM2Y0rLCbw,306
- gmicloud/_internal/_models.py,sha256=eArBzdhiMosLVZVUyoE_mvfxRS8yKPkuqhlDaa57Iog,17863
- gmicloud/_internal/_client/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
- gmicloud/_internal/_client/_artifact_client.py,sha256=-CyMdTauVovuv3whs8yUqmv3-WW2e9m2GoEG9D6eNbc,8374
- gmicloud/_internal/_client/_decorator.py,sha256=sy4gxzsUB6ORXHw5pqmMf7TTlK41Nmu1fhIhK2AIsbY,670
- gmicloud/_internal/_client/_file_upload_client.py,sha256=1JRs4X57S3EScPIP9w2DC1Uo6_Wbcjumcw3nVM7uIGM,4667
- gmicloud/_internal/_client/_http_client.py,sha256=j--3emTjJ_l9CTdnkTbcpf7gYcUEl341pv2O5cU67l0,5741
- gmicloud/_internal/_client/_iam_client.py,sha256=pgOXIqp9aJvcIUCEVkYPEyMUyxBftecojHAbs8Gbl94,7013
- gmicloud/_internal/_client/_task_client.py,sha256=69OqZC_kwSDkTSVVyi51Tn_OyUV6R0nin4z4gLfZ-Lg,6141
- gmicloud/_internal/_manager/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
- gmicloud/_internal/_manager/_artifact_manager.py,sha256=TBvGps__Kk1Ym7jztY3tNZ3XomKPrDIFPV7XyyLwHuw,15941
- gmicloud/_internal/_manager/_iam_manager.py,sha256=nAqPCaUfSXTnx2MEQa8e0YUOBFYWDRiETgK1PImdf4o,1167
- gmicloud/_internal/_manager/_task_manager.py,sha256=YDUcAdRkJhGumA1LLfpXfYs6jmLnev8P27UItPZHUBs,11268
- gmicloud/tests/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
- gmicloud/tests/test_artifacts.py,sha256=q1jiTk5DN4G3LCLCO_8KbWArdc6RG3sETe1MCEt-vbI,16979
- gmicloud/tests/test_tasks.py,sha256=yL-aFf80ShgTyxEONTWh-xbWDf5XnUNtIeA5hYvhKM0,10963
- gmicloud/utils/uninstall_packages.py,sha256=zzuuaJPf39oTXWZ_7tUAGseoxocuCbbkoglJSD5yDrE,1127
- gmicloud-0.1.6.dist-info/METADATA,sha256=rqwbl1_3RfzhdBpn9eb3u1My3pk10k7T3r23oEiTshY,4675
- gmicloud-0.1.6.dist-info/WHEEL,sha256=52BFRY2Up02UkjOa29eZOS2VxUrpPORXg1pkohGGUS8,91
- gmicloud-0.1.6.dist-info/top_level.txt,sha256=AZimLw3y0WPpLiSiOidZ1gD0dxALh-jQNk4fxC05hYE,9
- gmicloud-0.1.6.dist-info/RECORD,,