vec_inf-0.4.1-py3-none-any.whl → vec_inf-0.6.0-py3-none-any.whl
This diff compares the contents of two publicly released package versions as they appear in their respective public registries. It is provided for informational purposes only.
- vec_inf/README.md +3 -3
- vec_inf/cli/_cli.py +227 -325
- vec_inf/cli/_helper.py +400 -0
- vec_inf/cli/_utils.py +26 -135
- vec_inf/cli/_vars.py +32 -0
- vec_inf/client/__init__.py +31 -0
- vec_inf/client/_client_vars.py +213 -0
- vec_inf/client/_exceptions.py +37 -0
- vec_inf/client/_helper.py +674 -0
- vec_inf/client/_slurm_script_generator.py +179 -0
- vec_inf/client/_utils.py +287 -0
- vec_inf/client/api.py +302 -0
- vec_inf/client/config.py +128 -0
- vec_inf/client/models.py +225 -0
- vec_inf/client/slurm_vars.py +49 -0
- vec_inf/{models → config}/README.md +30 -12
- vec_inf/config/models.yaml +1300 -0
- vec_inf-0.6.0.dist-info/METADATA +193 -0
- vec_inf-0.6.0.dist-info/RECORD +25 -0
- vec_inf/launch_server.sh +0 -145
- vec_inf/models/models.csv +0 -85
- vec_inf/multinode_vllm.slurm +0 -124
- vec_inf/vllm.slurm +0 -59
- vec_inf-0.4.1.dist-info/METADATA +0 -121
- vec_inf-0.4.1.dist-info/RECORD +0 -16
- {vec_inf-0.4.1.dist-info → vec_inf-0.6.0.dist-info}/WHEEL +0 -0
- {vec_inf-0.4.1.dist-info → vec_inf-0.6.0.dist-info}/entry_points.txt +0 -0
- {vec_inf-0.4.1.dist-info → vec_inf-0.6.0.dist-info}/licenses/LICENSE +0 -0
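The headline changes in the file list are the split of the CLI into a reusable `vec_inf.client` package and the replacement of the flat `models/models.csv` registry with `config/models.yaml`. As a minimal sketch of why a nested registry helps (the field names below are illustrative assumptions, not the actual `models.yaml` schema), a mapping-shaped registry can attach per-model launch options, such as multi-node GPU layouts, that a single CSV row cannot express cleanly:

```python
# Hypothetical registry mirroring the nested shape a models.yaml can hold;
# the real schema in vec_inf/config/models.yaml may differ.
REGISTRY = {
    "Meta-Llama-3-70B-Instruct": {
        "gpus_per_node": 4,
        "num_nodes": 1,
    },
    "dbrx-instruct": {
        "gpus_per_node": 4,
        "num_nodes": 2,  # multi-node entry, e.g. "8x a40 (2 nodes, 4 a40/node)"
    },
}


def total_gpus(name: str) -> int:
    """Total GPUs a registry entry requests across all nodes."""
    entry = REGISTRY[name]
    return entry["gpus_per_node"] * entry["num_nodes"]


print(total_gpus("dbrx-instruct"))  # 4 GPUs/node x 2 nodes -> 8
```

A flat CSV forces every such option into ad-hoc columns; the YAML layout lets each model carry only the keys it needs.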
vec_inf/{models → config}/README.md

@@ -24,12 +24,6 @@ More profiling metrics coming soon!
 | [`CodeLlama-70b-hf`](https://huggingface.co/meta-llama/CodeLlama-70b-hf) | 4x a40 | - tokens/s | - tokens/s |
 | [`CodeLlama-70b-Instruct-hf`](https://huggingface.co/meta-llama/CodeLlama-70b-Instruct-hf) | 4x a40 | - tokens/s | - tokens/s |
 
-### [Databricks: DBRX](https://huggingface.co/collections/databricks/dbrx-6601c0852a0cdd3c59f71962)
-
-| Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
-|:----------:|:----------:|:----------:|:----------:|
-| [`dbrx-instruct`](https://huggingface.co/databricks/dbrx-instruct) | 8x a40 (2 nodes, 4 a40/node) | 107 tokens/s | 904 tokens/s |
-
 ### [Google: Gemma 2](https://huggingface.co/collections/google/gemma-2-release-667d6600fd5220e7b967f315)
 
 | Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
@@ -104,12 +98,6 @@ More profiling metrics coming soon!
 |:----------:|:----------:|:----------:|:----------:|
 | [`Phi-3-medium-128k-instruct`](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) | 2x a40 | - tokens/s | - tokens/s |
 
-### [Aaditya Ura: Llama3-OpenBioLLM](https://huggingface.co/aaditya/Llama3-OpenBioLLM-70B)
-
-| Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
-|:----------:|:----------:|:----------:|:----------:|
-| [`Llama3-OpenBioLLM-70B`](https://huggingface.co/aaditya/Llama3-OpenBioLLM-70B) | 4x a40 | - tokens/s | - tokens/s |
-
 ### [Nvidia: Llama-3.1-Nemotron](https://huggingface.co/collections/nvidia/llama-31-nemotron-70b-670e93cd366feea16abc13d8)
 
 | Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
@@ -162,6 +150,13 @@ More profiling metrics coming soon!
 
 ## Vision Language Models
 
+### [allenai: Molmo](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19)
+
+| Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
+|:----------:|:----------:|:----------:|:----------:|
+| [`Molmo-7B-D-0924`](https://huggingface.co/allenai/Molmo-7B-D-0924) | 1x a40 | - tokens/s | - tokens/s |
+
+
 ### [LLaVa-1.5](https://huggingface.co/collections/llava-hf/llava-15-65f762d5b6941db5c2ba07e0)
 
 | Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
@@ -181,6 +176,7 @@ More profiling metrics coming soon!
 | Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
 |:----------:|:----------:|:----------:|:----------:|
 | [`Phi-3-vision-128k-instruct`](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) | 2x a40 | - tokens/s | - tokens/s |
+| [`Phi-3.5-vision-instruct`](https://huggingface.co/microsoft/Phi-3.5-vision-instruct) | 2x a40 | - tokens/s | - tokens/s |
 
 ### [Meta: Llama 3.2](https://huggingface.co/collections/meta-llama/llama-32-66f448ffc8c32f949b04c8cf)
 
@@ -199,6 +195,27 @@ More profiling metrics coming soon!
 |:----------:|:----------:|:----------:|:----------:|
 | [`Pixtral-12B-2409`](https://huggingface.co/mistralai/Pixtral-12B-2409) | 1x a40 | - tokens/s | - tokens/s |
 
+### [OpenGVLab: InternVL2.5](https://huggingface.co/collections/OpenGVLab/internvl25-673e1019b66e2218f68d7c1c)
+
+| Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
+|:----------:|:----------:|:----------:|:----------:|
+| [`InternVL2_5-8B`](https://huggingface.co/OpenGVLab/InternVL2_5-8B) | 1x a40 | - tokens/s | - tokens/s |
+| [`InternVL2_5-26B`](https://huggingface.co/OpenGVLab/InternVL2_5-26B) | 2x a40 | - tokens/s | - tokens/s |
+| [`InternVL2_5-38B`](https://huggingface.co/OpenGVLab/InternVL2_5-38B) | 4x a40 | - tokens/s | - tokens/s |
+
+### [THUDM: GLM-4](https://huggingface.co/collections/THUDM/glm-4-665fcf188c414b03c2f7e3b7)
+
+| Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
+|:----------:|:----------:|:----------:|:----------:|
+| [`glm-4v-9b`](https://huggingface.co/THUDM/glm-4v-9b) | 1x a40 | - tokens/s | - tokens/s |
+
+### [DeepSeek: DeepSeek-VL2](https://huggingface.co/collections/deepseek-ai/deepseek-vl2-675c22accc456d3beb4613ab)
+| Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
+|:----------:|:----------:|:----------:|:----------:|
+| [`deepseek-vl2`](https://huggingface.co/deepseek-ai/deepseek-vl2) | 2x a40 | - tokens/s | - tokens/s |
+| [`deepseek-vl2-small`](https://huggingface.co/deepseek-ai/deepseek-vl2-small) | 1x a40 | - tokens/s | - tokens/s |
+
+
 ## Text Embedding Models
 
 ### [Liang Wang: e5](https://huggingface.co/intfloat)
@@ -225,3 +242,4 @@ More profiling metrics coming soon!
 | Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
 |:----------:|:----------:|:----------:|:----------:|
 | [`Qwen2.5-Math-RM-72B`](https://huggingface.co/Qwen/Qwen2.5-Math-RM-72B) | 4x a40 | - tokens/s | - tokens/s |
+| [`Qwen2.5-Math-PRM-7B`](https://huggingface.co/Qwen/Qwen2.5-Math-PRM-7B) | 1x a40 | - tokens/s | - tokens/s |