vec-inf 0.4.1__py3-none-any.whl → 0.6.0__py3-none-any.whl

This diff shows the changes between two publicly released versions of the package, as published to their public registry. It is provided for informational purposes only.
@@ -24,12 +24,6 @@ More profiling metrics coming soon!
 | [`CodeLlama-70b-hf`](https://huggingface.co/meta-llama/CodeLlama-70b-hf) | 4x a40 | - tokens/s | - tokens/s |
 | [`CodeLlama-70b-Instruct-hf`](https://huggingface.co/meta-llama/CodeLlama-70b-Instruct-hf) | 4x a40 | - tokens/s | - tokens/s |
 
-### [Databricks: DBRX](https://huggingface.co/collections/databricks/dbrx-6601c0852a0cdd3c59f71962)
-
-| Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
-|:----------:|:----------:|:----------:|:----------:|
-| [`dbrx-instruct`](https://huggingface.co/databricks/dbrx-instruct) | 8x a40 (2 nodes, 4 a40/node) | 107 tokens/s | 904 tokens/s |
-
 ### [Google: Gemma 2](https://huggingface.co/collections/google/gemma-2-release-667d6600fd5220e7b967f315)
 
 | Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
@@ -104,12 +98,6 @@ More profiling metrics coming soon!
 |:----------:|:----------:|:----------:|:----------:|
 | [`Phi-3-medium-128k-instruct`](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) | 2x a40 | - tokens/s | - tokens/s |
 
-### [Aaditya Ura: Llama3-OpenBioLLM](https://huggingface.co/aaditya/Llama3-OpenBioLLM-70B)
-
-| Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
-|:----------:|:----------:|:----------:|:----------:|
-| [`Llama3-OpenBioLLM-70B`](https://huggingface.co/aaditya/Llama3-OpenBioLLM-70B) | 4x a40 | - tokens/s | - tokens/s |
-
 ### [Nvidia: Llama-3.1-Nemotron](https://huggingface.co/collections/nvidia/llama-31-nemotron-70b-670e93cd366feea16abc13d8)
 
 | Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
@@ -162,6 +150,13 @@ More profiling metrics coming soon!
 
 ## Vision Language Models
 
+### [allenai: Molmo](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19)
+
+| Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
+|:----------:|:----------:|:----------:|:----------:|
+| [`Molmo-7B-D-0924`](https://huggingface.co/allenai/Molmo-7B-D-0924) | 1x a40 | - tokens/s | - tokens/s |
+
+
 ### [LLaVa-1.5](https://huggingface.co/collections/llava-hf/llava-15-65f762d5b6941db5c2ba07e0)
 
 | Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
@@ -181,6 +176,7 @@ More profiling metrics coming soon!
 | Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
 |:----------:|:----------:|:----------:|:----------:|
 | [`Phi-3-vision-128k-instruct`](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) | 2x a40 | - tokens/s | - tokens/s |
+| [`Phi-3.5-vision-instruct`](https://huggingface.co/microsoft/Phi-3.5-vision-instruct) | 2x a40 | - tokens/s | - tokens/s |
 
 ### [Meta: Llama 3.2](https://huggingface.co/collections/meta-llama/llama-32-66f448ffc8c32f949b04c8cf)
 
@@ -199,6 +195,27 @@ More profiling metrics coming soon!
 |:----------:|:----------:|:----------:|:----------:|
 | [`Pixtral-12B-2409`](https://huggingface.co/mistralai/Pixtral-12B-2409) | 1x a40 | - tokens/s | - tokens/s |
 
+### [OpenGVLab: InternVL2.5](https://huggingface.co/collections/OpenGVLab/internvl25-673e1019b66e2218f68d7c1c)
+
+| Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
+|:----------:|:----------:|:----------:|:----------:|
+| [`InternVL2_5-8B`](https://huggingface.co/OpenGVLab/InternVL2_5-8B) | 1x a40 | - tokens/s | - tokens/s |
+| [`InternVL2_5-26B`](https://huggingface.co/OpenGVLab/InternVL2_5-26B) | 2x a40 | - tokens/s | - tokens/s |
+| [`InternVL2_5-38B`](https://huggingface.co/OpenGVLab/InternVL2_5-38B) | 4x a40 | - tokens/s | - tokens/s |
+
+### [THUDM: GLM-4](https://huggingface.co/collections/THUDM/glm-4-665fcf188c414b03c2f7e3b7)
+
+| Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
+|:----------:|:----------:|:----------:|:----------:|
+| [`glm-4v-9b`](https://huggingface.co/THUDM/glm-4v-9b) | 1x a40 | - tokens/s | - tokens/s |
+
+### [DeepSeek: DeepSeek-VL2](https://huggingface.co/collections/deepseek-ai/deepseek-vl2-675c22accc456d3beb4613ab)
+| Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
+|:----------:|:----------:|:----------:|:----------:|
+| [`deepseek-vl2`](https://huggingface.co/deepseek-ai/deepseek-vl2) | 2x a40 | - tokens/s | - tokens/s |
+| [`deepseek-vl2-small`](https://huggingface.co/deepseek-ai/deepseek-vl2-small) | 1x a40 | - tokens/s | - tokens/s |
+
+
 ## Text Embedding Models
 
 ### [Liang Wang: e5](https://huggingface.co/intfloat)
@@ -225,3 +242,4 @@ More profiling metrics coming soon!
 | Variant | Suggested resource allocation | Avg prompt throughput | Avg generation throughput |
 |:----------:|:----------:|:----------:|:----------:|
 | [`Qwen2.5-Math-RM-72B`](https://huggingface.co/Qwen/Qwen2.5-Math-RM-72B) | 4x a40 | - tokens/s | - tokens/s |
+| [`Qwen2.5-Math-PRM-7B`](https://huggingface.co/Qwen/Qwen2.5-Math-PRM-7B) | 1x a40 | - tokens/s | - tokens/s |