vec-inf 0.4.1__py3-none-any.whl → 0.6.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,193 @@
1
+ Metadata-Version: 2.4
2
+ Name: vec-inf
3
+ Version: 0.6.0
4
+ Summary: Efficient LLM inference on Slurm clusters using vLLM.
5
+ Author-email: Marshall Wang <marshall.wang@vectorinstitute.ai>
6
+ License-Expression: MIT
7
+ License-File: LICENSE
8
+ Requires-Python: >=3.10
9
+ Requires-Dist: click>=8.1.0
10
+ Requires-Dist: pydantic>=2.10.6
11
+ Requires-Dist: pyyaml>=6.0.2
12
+ Requires-Dist: requests>=2.31.0
13
+ Requires-Dist: rich>=13.7.0
14
+ Provides-Extra: dev
15
+ Requires-Dist: cupy-cuda12x==12.1.0; extra == 'dev'
16
+ Requires-Dist: ray>=2.40.0; extra == 'dev'
17
+ Requires-Dist: torch>=2.5.1; extra == 'dev'
18
+ Requires-Dist: vllm-nccl-cu12<2.19,>=2.18; extra == 'dev'
19
+ Requires-Dist: vllm>=0.7.3; extra == 'dev'
20
+ Requires-Dist: xgrammar>=0.1.11; extra == 'dev'
21
+ Description-Content-Type: text/markdown
22
+
23
+ # Vector Inference: Easy inference on Slurm clusters
24
+
25
+ ----------------------------------------------------
26
+
27
+ [![PyPI](https://img.shields.io/pypi/v/vec-inf)](https://pypi.org/project/vec-inf)
28
+ [![downloads](https://img.shields.io/pypi/dm/vec-inf)](https://pypistats.org/packages/vec-inf)
29
+ [![code checks](https://github.com/VectorInstitute/vector-inference/actions/workflows/code_checks.yml/badge.svg)](https://github.com/VectorInstitute/vector-inference/actions/workflows/code_checks.yml)
30
+ [![docs](https://github.com/VectorInstitute/vector-inference/actions/workflows/docs.yml/badge.svg)](https://github.com/VectorInstitute/vector-inference/actions/workflows/docs.yml)
31
+ [![codecov](https://codecov.io/github/VectorInstitute/vector-inference/branch/main/graph/badge.svg?token=NI88QSIGAC)](https://app.codecov.io/github/VectorInstitute/vector-inference/tree/main)
32
+ ![GitHub License](https://img.shields.io/github/license/VectorInstitute/vector-inference)
33
+
34
+ This repository provides an easy-to-use solution for running inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). **All scripts in this repository run natively on the Vector Institute cluster environment**. To adapt to other environments, update the environment variables in [`vec_inf/client/slurm_vars.py`](vec_inf/client/slurm_vars.py) and the model config for cached model weights in [`vec_inf/config/models.yaml`](vec_inf/config/models.yaml) accordingly.
35
+
36
+ ## Installation
37
+ If you are using the Vector cluster environment and don't need any customization to the inference server environment, run the following to install the package:
38
+
39
+ ```bash
40
+ pip install vec-inf
41
+ ```
42
+ Otherwise, we recommend using the provided [`Dockerfile`](Dockerfile) to set up your own environment with the package.
43
+
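+ As a rough sketch (the image tag and runtime flags below are illustrative, not part of the documented workflow), building and entering such an environment could look like:
+ 
+ ```bash
+ # Build an image from the repository's Dockerfile (tag name is arbitrary)
+ docker build -t vec-inf-env .
+ 
+ # Start an interactive shell with GPU access (requires the NVIDIA Container Toolkit)
+ docker run --gpus all -it --rm vec-inf-env bash
+ ```
+ 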
44
+ ## Usage
45
+
46
+ Vector Inference provides two user interfaces: a CLI and an API.
47
+
48
+ ### CLI
49
+
50
+ The `launch` command allows users to deploy a model as a Slurm job. If the job launches successfully, a URL endpoint is exposed to which the user can send inference requests.
51
+
52
+ We will use the Llama 3.1 model as an example. To launch an OpenAI-compatible inference server for Meta-Llama-3.1-8B-Instruct, run:
53
+
54
+ ```bash
55
+ vec-inf launch Meta-Llama-3.1-8B-Instruct
56
+ ```
57
+ You should see an output like the following:
58
+
59
+ <img width="600" alt="launch_image" src="https://github.com/user-attachments/assets/a72a99fd-4bf2-408e-8850-359761d96c4f">
60
+
61
+
62
+ #### Overrides
63
+
64
+ Models that are already supported by `vec-inf` are launched using the cached configuration (set in [slurm_vars.py](vec_inf/client/slurm_vars.py)) or the [default configuration](vec_inf/config/models.yaml). You can override these values by providing additional parameters. Use `vec-inf launch --help` to see the full list of parameters that can be
65
+ overridden. For example, to override `qos`:
66
+
67
+ ```bash
68
+ vec-inf launch Meta-Llama-3.1-8B-Instruct --qos <new_qos>
69
+ ```
70
+
71
+ To override the default vLLM engine arguments, specify them as a comma-separated string:
72
+
73
+ ```bash
74
+ vec-inf launch Meta-Llama-3.1-8B-Instruct --vllm-args '--max-model-len=65536,--compilation-config=3'
75
+ ```
76
+
77
+ The full list of vLLM engine arguments can be found [here](https://docs.vllm.ai/en/stable/serving/engine_args.html); make sure you select the correct vLLM version.
78
+
79
+ #### Custom models
80
+
81
+ You can also launch your own custom model as long as its architecture is [supported by vLLM](https://docs.vllm.ai/en/stable/models/supported_models.html); make sure to follow the instructions below:
82
+ * Your model weights directory name should follow the convention `$MODEL_FAMILY-$MODEL_VARIANT` (`$MODEL_VARIANT` is optional).
83
+ * Your model weights directory should contain HuggingFace format weights.
84
+ * You should specify your model configuration by:
85
+     * Creating a custom configuration file for your model and specifying its path by setting the environment variable `VEC_INF_CONFIG`. Check the [default parameters](vec_inf/config/models.yaml) file for the config file format. All parameters for the model should be specified in that config file.
86
+     * Using launch command options to specify your model setup.
87
+ * For other model launch parameters, you can reference the default values of similar models using the [`list` command](#list-command).
88
+
89
+ Here is an example of deploying a custom [Qwen2.5-7B-Instruct-1M](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M) model, which is not
90
+ in the default list of supported models, using a user-defined config. In this case, the model weights are assumed to be downloaded to
91
+ a `model-weights` directory inside the user's home directory. The weights directory follows the naming convention, so it
92
+ would be named `Qwen2.5-7B-Instruct-1M`. The following YAML file would need to be created; let's say it is named `/h/<username>/my-model-config.yaml`.
93
+
94
+ ```yaml
95
+ models:
96
+   Qwen2.5-7B-Instruct-1M:
97
+     model_family: Qwen2.5
98
+     model_variant: 7B-Instruct-1M
99
+     model_type: LLM
100
+     gpus_per_node: 1
101
+     num_nodes: 1
102
+     vocab_size: 152064
103
+     qos: m2
104
+     time: 08:00:00
105
+     partition: a40
106
+     model_weights_parent_dir: /h/<username>/model-weights
107
+     vllm_args:
108
+       --max-model-len: 1010000
109
+       --max-num-seqs: 256
110
+       --compilation-config: 3
111
+ ```
112
+
113
+ You would then set the `VEC_INF_CONFIG` path using:
114
+
115
+ ```bash
116
+ export VEC_INF_CONFIG=/h/<username>/my-model-config.yaml
117
+ ```
118
+
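+ With the config file in place and `VEC_INF_CONFIG` exported, the custom model can then be launched by the name given in the config, mirroring the standard `launch` usage shown earlier:
+ 
+ ```bash
+ vec-inf launch Qwen2.5-7B-Instruct-1M
+ ```
+ 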
119
+ Note that there are other parameters that can be added to the config but are not shown in this example; check [`ModelConfig`](vec_inf/client/config.py) for details.
120
+
121
+ #### Other commands
122
+
123
+ * `status`: Check the model status by providing its Slurm job ID, `--json-mode` supported.
124
+ * `metrics`: Streams performance metrics to the console.
125
+ * `shutdown`: Shut down a model by providing its Slurm job ID.
126
+ * `list`: List all available model names, or view the default/cached configuration of a specific model, `--json-mode` supported.
127
+
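+ For illustration, typical invocations could look like the following; the Slurm job ID `14933053` is a placeholder, and the exact argument forms should be confirmed with each command's `--help`:
+ 
+ ```bash
+ vec-inf status 14933053                    # check the status of a launched model
+ vec-inf metrics 14933053                   # stream performance metrics to the console
+ vec-inf list                               # list all available model names
+ vec-inf list Meta-Llama-3.1-8B-Instruct    # view the default/cached config of one model
+ vec-inf shutdown 14933053                  # shut down the model's Slurm job
+ ```
+ 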
128
+ For more details on the usage of these commands, refer to the [User Guide](https://vectorinstitute.github.io/vector-inference/user_guide/).
129
+
130
+ ### API
131
+
132
+ Example:
133
+
134
+ ```python
135
+ >>> from vec_inf.api import VecInfClient, ModelStatus
136
+ >>> client = VecInfClient()
137
+ >>> response = client.launch_model("Meta-Llama-3.1-8B-Instruct")
138
+ >>> job_id = response.slurm_job_id
139
+ >>> status = client.get_status(job_id)
140
+ >>> if status.status == ModelStatus.READY:
141
+ ... print(f"Model is ready at {status.base_url}")
142
+ >>> client.shutdown_model(job_id)
143
+ ```
144
+
145
+ For details on the usage of the API, refer to the [API Reference](https://vectorinstitute.github.io/vector-inference/api/).
146
+
147
+ ## Check Job Configuration
148
+
149
+ With every model launch, a Slurm script is generated dynamically based on the job and model configuration. Once the Slurm job is queued, the generated Slurm script is moved to the log directory for reproducibility, located at `$log_dir/$model_family/$model_name.$slurm_job_id/$model_name.$slurm_job_id.slurm`. In the same directory, you can also find a JSON file with the same name that captures the launch configuration and will contain an entry for the server URL once the server is ready.
150
+
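+ As a sketch, assuming the default log directory is `~/.vec-inf-logs` (the model name and job ID below are placeholders), the generated files can be inspected like this:
+ 
+ ```bash
+ LOG_DIR="$HOME/.vec-inf-logs"
+ MODEL_FAMILY=Meta-Llama-3.1
+ MODEL_NAME=Meta-Llama-3.1-8B-Instruct
+ JOB_ID=14933053
+ 
+ # The dynamically generated Slurm script and the launch configuration JSON
+ cat "$LOG_DIR/$MODEL_FAMILY/$MODEL_NAME.$JOB_ID/$MODEL_NAME.$JOB_ID.slurm"
+ cat "$LOG_DIR/$MODEL_FAMILY/$MODEL_NAME.$JOB_ID/$MODEL_NAME.$JOB_ID.json"
+ ```
+ 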
151
+ ## Send inference requests
152
+
153
+ Once the inference server is ready, you can start sending inference requests. We provide example scripts for sending inference requests in the [`examples`](examples) folder. Make sure to update the model server URL and the model weights location in the scripts. For example, you can run `python examples/inference/llm/chat_completions.py` and expect to see output like the following:
154
+
155
+ ```json
156
+ {
157
+   "id": "chatcmpl-387c2579231948ffaf66cdda5439d3dc",
158
+   "choices": [
159
+     {
160
+       "finish_reason": "stop",
161
+       "index": 0,
162
+       "logprobs": null,
163
+       "message": {
164
+         "content": "Arrr, I be Captain Chatbeard, the scurviest chatbot on the seven seas! Ye be wantin' to know me identity, eh? Well, matey, I be a swashbucklin' AI, here to provide ye with answers and swappin' tales, savvy?",
165
+         "role": "assistant",
166
+         "function_call": null,
167
+         "tool_calls": [],
168
+         "reasoning_content": null
169
+       },
170
+       "stop_reason": null
171
+     }
172
+   ],
173
+   "created": 1742496683,
174
+   "model": "Meta-Llama-3.1-8B-Instruct",
175
+   "object": "chat.completion",
176
+   "system_fingerprint": null,
177
+   "usage": {
178
+     "completion_tokens": 66,
179
+     "prompt_tokens": 32,
180
+     "total_tokens": 98,
181
+     "prompt_tokens_details": null
182
+   },
183
+   "prompt_logprobs": null
184
+ }
185
+ ```
186
+ **NOTE**: For multimodal models, currently only `ChatCompletion` is available, and only one image can be provided for each prompt.
187
+
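+ Requests can also be sent directly with `curl`; the sketch below assumes the server URL reported at launch is `http://gpu029:8081/v1` (replace the host and port with the values from your own job):
+ 
+ ```bash
+ curl http://gpu029:8081/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -d '{
+         "model": "Meta-Llama-3.1-8B-Instruct",
+         "messages": [{"role": "user", "content": "Who are you?"}]
+       }'
+ ```
+ 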
188
+ ## SSH tunnel from your local device
189
+ If you want to run inference from your local device, you can open an SSH tunnel to your cluster environment as follows:
190
+ ```bash
191
+ ssh -L 8081:172.17.8.29:8081 username@v.vectorinstitute.ai -N
192
+ ```
193
+ Here the last number in the address corresponds to the GPU node (gpu029 in this case). The example above is for the Vector cluster; change the variables accordingly for your environment.
@@ -0,0 +1,25 @@
1
+ vec_inf/README.md,sha256=3ocJHfV3kRftXFUCdHw3B-p4QQlXuNqkHnjPPNkCgfM,543
2
+ vec_inf/__init__.py,sha256=bHwSIz9lebYuxIemni-lP0h3gwJHVbJnwExQKGJWw_Q,23
3
+ vec_inf/find_port.sh,sha256=bGQ6LYSFVSsfDIGatrSg5YvddbZfaPL0R-Bjo4KYD6I,1088
4
+ vec_inf/cli/__init__.py,sha256=5XIvGQCOnaGl73XMkwetjC-Ul3xuXGrWDXdYJ3aUzvU,27
5
+ vec_inf/cli/_cli.py,sha256=bqyLvFK4Vqoh-wAaUPg50_qYbrW-c9Cl_-YySgVk5_M,9871
6
+ vec_inf/cli/_helper.py,sha256=i1QvJeIT3z7me6bv2Vot5c3NY555Dgo3q8iRlxhOlZ4,13047
7
+ vec_inf/cli/_utils.py,sha256=23vSbmvNOWY1-W1aOAwYqNDkDDmx-5UVlCiXAtxUZ8A,1057
8
+ vec_inf/cli/_vars.py,sha256=V6DrJs_BuUa4yNcbBSSnMwpcyXwEBsizy3D0ubIg2fA,777
9
+ vec_inf/client/__init__.py,sha256=OLlUJ4kL1R-Kh-nXNbvKlAZ3mtHcnozHprVufkVCNWk,739
10
+ vec_inf/client/_client_vars.py,sha256=eVQjpuASd8beBjAeAbQnMRZM8nCLZMHx-x62BcXVnYA,7163
11
+ vec_inf/client/_exceptions.py,sha256=94Nx_5k1SriJNXzbdnwyXFZolyMutydU08Gsikawzzo,749
12
+ vec_inf/client/_helper.py,sha256=76OTCroNR5e3e7T2qSV_tkexDaUQsJrs8bFiMJ5NaxU,22718
13
+ vec_inf/client/_slurm_script_generator.py,sha256=jFgr2Pu7b_Uqli3DBvxUr9MI1-3TA6wwxg07O2rTwPs,6299
14
+ vec_inf/client/_utils.py,sha256=1dB2O1neEhZNk6MJbBybLQm42vsmEevA2TI0F_kGi0o,8796
15
+ vec_inf/client/api.py,sha256=TYn4lP5Ene8MEuXWYo6ZbGYw9aPnaMlT32SH7jLCifM,9605
16
+ vec_inf/client/config.py,sha256=kOhxoepsvArxRFNlwq1sLDHsxDewLwxRV1VwsL0MrGU,4683
17
+ vec_inf/client/models.py,sha256=JZDUMBX3XKOClaq_yJUpDUSgiDy42nT5Dq5bxQWiO2I,5778
18
+ vec_inf/client/slurm_vars.py,sha256=lroK41L4gEVVZNxxE3bEpbKsdMwnH79-7iCKd4zWEa4,1069
19
+ vec_inf/config/README.md,sha256=OlgnD_Ojei_xLkNyS7dGvYMFUzQFqjVRVw0V-QMk_3g,17863
20
+ vec_inf/config/models.yaml,sha256=PR91vOzINVOkAco9S-_VIXQ5Un6ekeoWz2Pj8DMR8LQ,29630
21
+ vec_inf-0.6.0.dist-info/METADATA,sha256=-xadTsrAR3tOfPyxTdGB9DLuhWMu_mnp_JF5Aa-1-08,9755
22
+ vec_inf-0.6.0.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
23
+ vec_inf-0.6.0.dist-info/entry_points.txt,sha256=uNRXjCuJSR2nveEqD3IeMznI9oVI9YLZh5a24cZg6B0,49
24
+ vec_inf-0.6.0.dist-info/licenses/LICENSE,sha256=mq8zeqpvVSF1EsxmydeXcokt8XnEIfSofYn66S2-cJI,1073
25
+ vec_inf-0.6.0.dist-info/RECORD,,
vec_inf/launch_server.sh DELETED
@@ -1,145 +0,0 @@
1
- #!/bin/bash
2
-
3
- # ================================= Read Named Args ======================================
4
-
5
- while [[ "$#" -gt 0 ]]; do
6
- case $1 in
7
- --model-family) model_family="$2"; shift ;;
8
- --model-variant) model_variant="$2"; shift ;;
9
- --model-type) model_type="$2"; shift ;;
10
- --partition) partition="$2"; shift ;;
11
- --qos) qos="$2"; shift ;;
12
- --time) walltime="$2"; shift ;;
13
- --num-nodes) num_nodes="$2"; shift ;;
14
- --num-gpus) num_gpus="$2"; shift ;;
15
- --max-model-len) max_model_len="$2"; shift ;;
16
- --max-num-seqs) max_num_seqs="$2"; shift ;;
17
- --vocab-size) vocab_size="$2"; shift ;;
18
- --data-type) data_type="$2"; shift ;;
19
- --venv) venv="$2"; shift ;;
20
- --log-dir) log_dir="$2"; shift ;;
21
- --model-weights-parent-dir) model_weights_parent_dir="$2"; shift ;;
22
- --pipeline-parallelism) pipeline_parallelism="$2"; shift ;;
23
- --enforce-eager) enforce_eager="$2"; shift ;;
24
- *) echo "Unknown parameter passed: $1"; exit 1 ;;
25
- esac
26
- shift
27
- done
28
-
29
- required_vars=(model_family model_variant model_type partition qos walltime num_nodes num_gpus max_model_len vocab_size data_type venv log_dir model_weights_parent_dir)
30
-
31
- for var in "$required_vars[@]"; do
32
- if [ -z "$!var" ]; then
33
- echo "Error: Missing required --$var argument."
34
- exit 1
35
- fi
36
- done
37
-
38
- export MODEL_FAMILY=$model_family
39
- export MODEL_VARIANT=$model_variant
40
- export MODEL_TYPE=$model_type
41
- export JOB_PARTITION=$partition
42
- export QOS=$qos
43
- export WALLTIME=$walltime
44
- export NUM_NODES=$num_nodes
45
- export NUM_GPUS=$num_gpus
46
- export VLLM_MAX_MODEL_LEN=$max_model_len
47
- export VLLM_MAX_LOGPROBS=$vocab_size
48
- export VLLM_DATA_TYPE=$data_type
49
- export VENV_BASE=$venv
50
- export LOG_DIR=$log_dir
51
- export MODEL_WEIGHTS_PARENT_DIR=$model_weights_parent_dir
52
-
53
- if [[ "$model_type" == "LLM" || "$model_type" == "VLM" ]]; then
54
- export VLLM_TASK="generate"
55
- elif [ "$model_type" == "Reward_Modeling" ]; then
56
- export VLLM_TASK="reward"
57
- elif [ "$model_type" == "Text_Embedding" ]; then
58
- export VLLM_TASK="embed"
59
- else
60
- echo "Error: Unknown model_type: $model_type"
61
- exit 1
62
- fi
63
-
64
- if [ -n "$max_num_seqs" ]; then
65
- export VLLM_MAX_NUM_SEQS=$max_num_seqs
66
- else
67
- export VLLM_MAX_NUM_SEQS=256
68
- fi
69
-
70
- if [ -n "$pipeline_parallelism" ]; then
71
- export PIPELINE_PARALLELISM=$pipeline_parallelism
72
- else
73
- export PIPELINE_PARALLELISM="False"
74
- fi
75
-
76
- if [ -n "$enforce_eager" ]; then
77
- export ENFORCE_EAGER=$enforce_eager
78
- else
79
- export ENFORCE_EAGER="False"
80
- fi
81
-
82
- # ================================= Set default environment variables ======================================
83
- # Slurm job configuration
84
- export JOB_NAME="$MODEL_FAMILY-$MODEL_VARIANT"
85
- if [ "$JOB_NAME" == "DeepSeek-R1-None" ]; then
86
- export JOB_NAME=$MODEL_FAMILY
87
- fi
88
-
89
- if [ "$LOG_DIR" = "default" ]; then
90
- export LOG_DIR="$HOME/.vec-inf-logs/$MODEL_FAMILY"
91
- fi
92
- mkdir -p $LOG_DIR
93
-
94
- # Model and entrypoint configuration. API Server URL (host, port) are set automatically based on the
95
- # SLURM job
96
- export SRC_DIR="$(dirname "$0")"
97
- export MODEL_DIR="${SRC_DIR}/models/${MODEL_FAMILY}"
98
-
99
- # Variables specific to your working environment, below are examples for the Vector cluster
100
- export VLLM_MODEL_WEIGHTS="${MODEL_WEIGHTS_PARENT_DIR}/${JOB_NAME}"
101
- export LD_LIBRARY_PATH="/scratch/ssd001/pkgs/cudnn-11.7-v8.5.0.96/lib/:/scratch/ssd001/pkgs/cuda-11.7/targets/x86_64-linux/lib/"
102
-
103
-
104
- # ================================ Validate Inputs & Launch Server =================================
105
-
106
- # Set data type to fp16 instead of bf16 for non-Ampere GPUs
107
- fp16_partitions="t4v1 t4v2"
108
-
109
- # choose from 'auto', 'half', 'float16', 'bfloat16', 'float', 'float32'
110
- if [[ $fp16_partitions =~ $JOB_PARTITION ]]; then
111
- export VLLM_DATA_TYPE="float16"
112
- echo "Data type set to due to non-Ampere GPUs used: $VLLM_DATA_TYPE"
113
- fi
114
-
115
- echo Job Name: $JOB_NAME
116
- echo Partition: $JOB_PARTITION
117
- echo Num Nodes: $NUM_NODES
118
- echo GPUs per Node: $NUM_GPUS
119
- echo QOS: $QOS
120
- echo Walltime: $WALLTIME
121
- echo Model Type: $MODEL_TYPE
122
- echo Task: $VLLM_TASK
123
- echo Data Type: $VLLM_DATA_TYPE
124
- echo Max Model Length: $VLLM_MAX_MODEL_LEN
125
- echo Max Num Seqs: $VLLM_MAX_NUM_SEQS
126
- echo Vocabulary Size: $VLLM_MAX_LOGPROBS
127
- echo Pipeline Parallelism: $PIPELINE_PARALLELISM
128
- echo Enforce Eager: $ENFORCE_EAGER
129
- echo Log Directory: $LOG_DIR
130
- echo Model Weights Parent Directory: $MODEL_WEIGHTS_PARENT_DIR
131
-
132
- is_special=""
133
- if [ "$NUM_NODES" -gt 1 ]; then
134
- is_special="multinode_"
135
- fi
136
-
137
- sbatch --job-name $JOB_NAME \
138
- --partition $JOB_PARTITION \
139
- --nodes $NUM_NODES \
140
- --gres gpu:$NUM_GPUS \
141
- --qos $QOS \
142
- --time $WALLTIME \
143
- --output $LOG_DIR/$JOB_NAME.%j.out \
144
- --error $LOG_DIR/$JOB_NAME.%j.err \
145
- $SRC_DIR/${is_special}vllm.slurm
vec_inf/models/models.csv DELETED
@@ -1,85 +0,0 @@
1
- model_name,model_family,model_variant,model_type,num_gpus,num_nodes,vocab_size,max_model_len,max_num_seqs,pipeline_parallelism,enforce_eager,qos,time,partition,data_type,venv,log_dir,model_weights_parent_dir
2
- c4ai-command-r-plus,c4ai-command-r,plus,LLM,4,2,256000,8192,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
3
- c4ai-command-r-plus-08-2024,c4ai-command-r,plus-08-2024,LLM,4,2,256000,65536,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
4
- c4ai-command-r-08-2024,c4ai-command-r,08-2024,LLM,2,1,256000,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
5
- CodeLlama-7b-hf,CodeLlama,7b-hf,LLM,1,1,32000,16384,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
6
- CodeLlama-7b-Instruct-hf,CodeLlama,7b-Instruct-hf,LLM,1,1,32000,16384,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
7
- CodeLlama-13b-hf,CodeLlama,13b-hf,LLM,1,1,32000,16384,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
8
- CodeLlama-13b-Instruct-hf,CodeLlama,13b-Instruct-hf,LLM,1,1,32000,16384,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
9
- CodeLlama-34b-hf,CodeLlama,34b-hf,LLM,2,1,32000,16384,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
10
- CodeLlama-34b-Instruct-hf,CodeLlama,34b-Instruct-hf,LLM,2,1,32000,16384,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
11
- CodeLlama-70b-hf,CodeLlama,70b-hf,LLM,4,1,32000,4096,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
12
- CodeLlama-70b-Instruct-hf,CodeLlama,70b-Instruct-hf,LLM,4,1,32000,4096,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
13
- dbrx-instruct,dbrx,instruct,LLM,4,2,100352,32000,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
14
- gemma-2-9b,gemma-2,9b,LLM,1,1,256000,4096,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
15
- gemma-2-9b-it,gemma-2,9b-it,LLM,1,1,256000,4096,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
16
- gemma-2-27b,gemma-2,27b,LLM,2,1,256000,4096,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
17
- gemma-2-27b-it,gemma-2,27b-it,LLM,2,1,256000,4096,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
18
- Llama-2-7b-hf,Llama-2,7b-hf,LLM,1,1,32000,4096,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
19
- Llama-2-7b-chat-hf,Llama-2,7b-chat-hf,LLM,1,1,32000,4096,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
20
- Llama-2-13b-hf,Llama-2,13b-hf,LLM,1,1,32000,4096,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
21
- Llama-2-13b-chat-hf,Llama-2,13b-chat-hf,LLM,1,1,32000,4096,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
22
- Llama-2-70b-hf,Llama-2,70b-hf,LLM,4,1,32000,4096,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
23
- Llama-2-70b-chat-hf,Llama-2,70b-chat-hf,LLM,4,1,32000,4096,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
24
- llava-1.5-7b-hf,llava-1.5,7b-hf,VLM,1,1,32000,4096,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
25
- llava-1.5-13b-hf,llava-1.5,13b-hf,VLM,1,1,32000,4096,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
26
- llava-v1.6-mistral-7b-hf,llava-v1.6,mistral-7b-hf,VLM,1,1,32064,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
27
- llava-v1.6-34b-hf,llava-v1.6,34b-hf,VLM,2,1,64064,4096,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
28
- Meta-Llama-3-8B,Meta-Llama-3,8B,LLM,1,1,128256,8192,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
29
- Meta-Llama-3-8B-Instruct,Meta-Llama-3,8B-Instruct,LLM,1,1,128256,8192,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
30
- Meta-Llama-3-70B,Meta-Llama-3,70B,LLM,4,1,128256,8192,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
31
- Meta-Llama-3-70B-Instruct,Meta-Llama-3,70B-Instruct,LLM,4,1,128256,8192,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
32
- Meta-Llama-3.1-8B,Meta-Llama-3.1,8B,LLM,1,1,128256,131072,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
33
- Meta-Llama-3.1-8B-Instruct,Meta-Llama-3.1,8B-Instruct,LLM,1,1,128256,131072,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
34
- Meta-Llama-3.1-70B,Meta-Llama-3.1,70B,LLM,4,1,128256,65536,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
35
- Meta-Llama-3.1-70B-Instruct,Meta-Llama-3.1,70B-Instruct,LLM,4,1,128256,65536,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
36
- Meta-Llama-3.1-405B-Instruct,Meta-Llama-3.1,405B-Instruct,LLM,4,8,128256,16384,256,true,false,m4,02:00:00,a40,auto,singularity,default,/model-weights
37
- Mistral-7B-v0.1,Mistral,7B-v0.1,LLM,1,1,32000,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
38
- Mistral-7B-Instruct-v0.1,Mistral,7B-Instruct-v0.1,LLM,1,1,32000,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
39
- Mistral-7B-Instruct-v0.2,Mistral,7B-Instruct-v0.2,LLM,1,1,32000,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
40
- Mistral-7B-v0.3,Mistral,7B-v0.3,LLM,1,1,32768,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
41
- Mistral-7B-Instruct-v0.3,Mistral,7B-Instruct-v0.3,LLM,1,1,32768,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
42
- Mistral-Large-Instruct-2407,Mistral,Large-Instruct-2407,LLM,4,2,32768,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
43
- Mistral-Large-Instruct-2411,Mistral,Large-Instruct-2411,LLM,4,2,32768,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
44
- Mixtral-8x7B-Instruct-v0.1,Mixtral,8x7B-Instruct-v0.1,LLM,4,1,32000,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
45
- Mixtral-8x22B-v0.1,Mixtral,8x22B-v0.1,LLM,4,2,32768,65536,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
46
- Mixtral-8x22B-Instruct-v0.1,Mixtral,8x22B-Instruct-v0.1,LLM,4,2,32768,65536,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
47
- Phi-3-medium-128k-instruct,Phi-3,medium-128k-instruct,LLM,2,1,32064,131072,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
48
- Phi-3-vision-128k-instruct,Phi-3,vision-128k-instruct,VLM,2,1,32064,65536,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
49
- Llama3-OpenBioLLM-70B,Llama3-OpenBioLLM,70B,LLM,4,1,128256,8192,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
50
- Llama-3.1-Nemotron-70B-Instruct-HF,Llama-3.1-Nemotron,70B-Instruct-HF,LLM,4,1,128256,65536,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
51
- Llama-3.2-1B,Llama-3.2,1B,LLM,1,1,128256,131072,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
52
- Llama-3.2-1B-Instruct,Llama-3.2,1B-Instruct,LLM,1,1,128256,131072,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
53
- Llama-3.2-3B,Llama-3.2,3B,LLM,1,1,128256,131072,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
54
- Llama-3.2-3B-Instruct,Llama-3.2,3B-Instruct,LLM,1,1,128256,131072,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
55
- Llama-3.2-11B-Vision,Llama-3.2,11B-Vision,VLM,2,1,128256,4096,64,false,true,m2,08:00:00,a40,auto,singularity,default,/model-weights
56
- Llama-3.2-11B-Vision-Instruct,Llama-3.2,11B-Vision-Instruct,VLM,2,1,128256,4096,64,false,true,m2,08:00:00,a40,auto,singularity,default,/model-weights
57
- Llama-3.2-90B-Vision,Llama-3.2,90B-Vision,VLM,4,2,128256,4096,32,false,true,m2,08:00:00,a40,auto,singularity,default,/model-weights
58
- Llama-3.2-90B-Vision-Instruct,Llama-3.2,90B-Vision-Instruct,VLM,4,2,128256,4096,32,false,true,m2,08:00:00,a40,auto,singularity,default,/model-weights
59
- Qwen2.5-0.5B-Instruct,Qwen2.5,0.5B-Instruct,LLM,1,1,152064,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
60
- Qwen2.5-1.5B-Instruct,Qwen2.5,1.5B-Instruct,LLM,1,1,152064,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
61
- Qwen2.5-3B-Instruct,Qwen2.5,3B-Instruct,LLM,1,1,152064,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
62
- Qwen2.5-7B-Instruct,Qwen2.5,7B-Instruct,LLM,1,1,152064,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
63
- Qwen2.5-14B-Instruct,Qwen2.5,14B-Instruct,LLM,1,1,152064,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
64
- Qwen2.5-32B-Instruct,Qwen2.5,32B-Instruct,LLM,2,1,152064,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
65
- Qwen2.5-72B-Instruct,Qwen2.5,72B-Instruct,LLM,4,1,152064,16384,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
66
- Qwen2.5-Math-1.5B-Instruct,Qwen2.5,Math-1.5B-Instruct,LLM,1,1,152064,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
67
- Qwen2.5-Math-7B-Instruct,Qwen2.5,Math-7B-Instruct,LLM,1,1,152064,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
68
- Qwen2.5-Math-72B-Instruct,Qwen2.5,Math-72B-Instruct,LLM,4,1,152064,16384,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
69
- Qwen2.5-Coder-7B-Instruct,Qwen2.5,Coder-7B-Instruct,LLM,1,1,152064,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
70
- Qwen2.5-Math-RM-72B,Qwen2.5,Math-RM-72B,Reward Modeling,4,1,152064,4096,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
71
- QwQ-32B-Preview,QwQ,32B-Preview,LLM,2,1,152064,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
72
- Pixtral-12B-2409,Pixtral,12B-2409,VLM,1,1,131072,8192,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
73
- e5-mistral-7b-instruct,e5,mistral-7b-instruct,Text Embedding,1,1,32000,4096,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
74
- bge-base-en-v1.5,bge,base-en-v1.5,Text Embedding,1,1,30522,512,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
75
- all-MiniLM-L6-v2,all-MiniLM,L6-v2,Text Embedding,1,1,30522,512,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
76
- Llama-3.3-70B-Instruct,Llama-3.3,70B-Instruct,LLM,4,1,128256,65536,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
77
- InternVL2_5-26B,InternVL2_5,26B,VLM,2,1,92553,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
78
- InternVL2_5-38B,InternVL2_5,38B,VLM,4,1,92553,32768,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
79
- Aya-Expanse-32B,Aya-Expanse,32B,LLM,2,1,256000,8192,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
80
- DeepSeek-R1-Distill-Llama-70B,DeepSeek-R1,Distill-Llama-70B,LLM,4,1,128256,65536,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
81
- DeepSeek-R1-Distill-Llama-8B,DeepSeek-R1,Distill-Llama-8B,LLM,1,1,128256,131072,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
82
- DeepSeek-R1-Distill-Qwen-32B,DeepSeek-R1,Distill-Qwen-32B,LLM,4,1,152064,131072,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
83
- DeepSeek-R1-Distill-Qwen-14B,DeepSeek-R1,Distill-Qwen-14B,LLM,2,1,152064,131072,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
84
- DeepSeek-R1-Distill-Qwen-7B,DeepSeek-R1,Distill-Qwen-7B,LLM,1,1,152064,131072,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
85
- DeepSeek-R1-Distill-Qwen-1.5B,DeepSeek-R1,Distill-Qwen-1.5B,LLM,1,1,152064,131072,256,true,false,m2,08:00:00,a40,auto,singularity,default,/model-weights
@@ -1,124 +0,0 @@
1
- #!/bin/bash
2
- #SBATCH --cpus-per-task=16
3
- #SBATCH --mem=64G
4
- #SBATCH --exclusive
5
- #SBATCH --tasks-per-node=1
6
-
7
- # Load CUDA, change to the cuda version on your environment if different
8
- source /opt/lmod/lmod/init/profile
9
- module load cuda-12.3
10
- nvidia-smi
11
-
12
- source ${SRC_DIR}/find_port.sh
13
-
14
- if [ "$VENV_BASE" = "singularity" ]; then
15
- export SINGULARITY_IMAGE=/projects/aieng/public/vector-inference_latest.sif
16
- export VLLM_NCCL_SO_PATH=/vec-inf/nccl/libnccl.so.2.18.1
17
- module load singularity-ce/3.8.2
18
- singularity exec $SINGULARITY_IMAGE ray stop
19
- fi
20
-
21
- # Getting the node names
22
- nodes=$(scontrol show hostnames "$SLURM_JOB_NODELIST")
23
- nodes_array=($nodes)
24
-
25
- head_node=${nodes_array[0]}
26
- head_node_ip=$(srun --nodes=1 --ntasks=1 -w "$head_node" hostname --ip-address)
27
-
28
- # Find port for head node
29
- head_node_port=$(find_available_port $head_node_ip 8080 65535)
30
-
31
- # Starting the Ray head node
32
- ip_head=$head_node_ip:$head_node_port
33
- export ip_head
34
- echo "IP Head: $ip_head"
35
-
36
- echo "Starting HEAD at $head_node"
37
- if [ "$VENV_BASE" = "singularity" ]; then
38
- srun --nodes=1 --ntasks=1 -w "$head_node" \
39
- singularity exec --nv --bind ${MODEL_WEIGHTS_PARENT_DIR}:${MODEL_WEIGHTS_PARENT_DIR} $SINGULARITY_IMAGE \
40
- ray start --head --node-ip-address="$head_node_ip" --port=$head_node_port \
41
- --num-cpus "${SLURM_CPUS_PER_TASK}" --num-gpus "${NUM_GPUS}" --block &
42
- else
43
- srun --nodes=1 --ntasks=1 -w "$head_node" \
44
- ray start --head --node-ip-address="$head_node_ip" --port=$head_node_port \
45
- --num-cpus "${SLURM_CPUS_PER_TASK}" --num-gpus "${NUM_GPUS}" --block &
46
- fi
47
-
48
- # Starting the Ray worker nodes
49
- # Optional, though may be useful in certain versions of Ray < 1.0.
50
- sleep 10
51
-
52
- # number of nodes other than the head node
53
- worker_num=$((SLURM_JOB_NUM_NODES - 1))
54
-
55
- for ((i = 1; i <= worker_num; i++)); do
56
- node_i=${nodes_array[$i]}
57
- echo "Starting WORKER $i at $node_i"
58
- if [ "$VENV_BASE" = "singularity" ]; then
59
- srun --nodes=1 --ntasks=1 -w "$node_i" \
60
- singularity exec --nv --bind ${MODEL_WEIGHTS_PARENT_DIR}:${MODEL_WEIGHTS_PARENT_DIR} $SINGULARITY_IMAGE \
61
- ray start --address "$ip_head" \
62
- --num-cpus "${SLURM_CPUS_PER_TASK}" --num-gpus "${NUM_GPUS}" --block &
63
- else
64
- srun --nodes=1 --ntasks=1 -w "$node_i" \
65
- ray start --address "$ip_head" \
66
- --num-cpus "${SLURM_CPUS_PER_TASK}" --num-gpus "${NUM_GPUS}" --block &
67
- fi
68
-
69
- sleep 5
70
- done
71
-
72
-
73
- vllm_port_number=$(find_available_port $head_node_ip 8080 65535)
74
-
75
- echo "Server address: http://${head_node_ip}:${vllm_port_number}/v1"
76
-
77
- if [ "$PIPELINE_PARALLELISM" = "True" ]; then
78
- export PIPELINE_PARALLEL_SIZE=$NUM_NODES
79
- export TENSOR_PARALLEL_SIZE=$NUM_GPUS
80
- else
81
- export PIPELINE_PARALLEL_SIZE=1
82
- export TENSOR_PARALLEL_SIZE=$((NUM_NODES*NUM_GPUS))
83
- fi
84
-
85
- if [ "$ENFORCE_EAGER" = "True" ]; then
86
- export ENFORCE_EAGER="--enforce-eager"
87
- else
88
- export ENFORCE_EAGER=""
89
- fi
90
-
91
- # Activate vllm venv
92
- if [ "$VENV_BASE" = "singularity" ]; then
93
- singularity exec --nv --bind ${MODEL_WEIGHTS_PARENT_DIR}:${MODEL_WEIGHTS_PARENT_DIR} $SINGULARITY_IMAGE \
94
- python3.10 -m vllm.entrypoints.openai.api_server \
95
- --model ${VLLM_MODEL_WEIGHTS} \
96
- --served-model-name ${JOB_NAME} \
97
- --host "0.0.0.0" \
98
- --port ${vllm_port_number} \
99
- --pipeline-parallel-size ${PIPELINE_PARALLEL_SIZE} \
100
- --tensor-parallel-size ${TENSOR_PARALLEL_SIZE} \
101
- --dtype ${VLLM_DATA_TYPE} \
102
- --trust-remote-code \
103
- --max-logprobs ${VLLM_MAX_LOGPROBS} \
104
- --max-model-len ${VLLM_MAX_MODEL_LEN} \
105
- --max-num-seqs ${VLLM_MAX_NUM_SEQS} \
106
- --task ${VLLM_TASK} \
107
- ${ENFORCE_EAGER}
108
- else
109
- source ${VENV_BASE}/bin/activate
110
- python3 -m vllm.entrypoints.openai.api_server \
111
- --model ${VLLM_MODEL_WEIGHTS} \
112
- --served-model-name ${JOB_NAME} \
113
- --host "0.0.0.0" \
114
- --port ${vllm_port_number} \
115
- --pipeline-parallel-size ${PIPELINE_PARALLEL_SIZE} \
116
- --tensor-parallel-size ${TENSOR_PARALLEL_SIZE} \
117
- --dtype ${VLLM_DATA_TYPE} \
118
- --trust-remote-code \
119
- --max-logprobs ${VLLM_MAX_LOGPROBS} \
120
- --max-model-len ${VLLM_MAX_MODEL_LEN} \
121
- --max-num-seqs ${VLLM_MAX_NUM_SEQS} \
122
- --task ${VLLM_TASK} \
123
- ${ENFORCE_EAGER}
124
- fi
vec_inf/vllm.slurm DELETED
@@ -1,59 +0,0 @@
1
- #!/bin/bash
2
- #SBATCH --cpus-per-task=16
3
- #SBATCH --mem=64G
4
-
5
- # Load CUDA, change to the cuda version on your environment if different
6
- source /opt/lmod/lmod/init/profile
7
- module load cuda-12.3
8
- nvidia-smi
9
-
10
- source ${SRC_DIR}/find_port.sh
11
-
12
- # Write server url to file
13
- hostname=${SLURMD_NODENAME}
14
- vllm_port_number=$(find_available_port $hostname 8080 65535)
15
-
16
- echo "Server address: http://${hostname}:${vllm_port_number}/v1"
17
-
18
- if [ "$ENFORCE_EAGER" = "True" ]; then
19
- export ENFORCE_EAGER="--enforce-eager"
20
- else
21
- export ENFORCE_EAGER=""
22
- fi
23
-
24
- # Activate vllm venv
25
- if [ "$VENV_BASE" = "singularity" ]; then
26
- export SINGULARITY_IMAGE=/projects/aieng/public/vector-inference_latest.sif
27
- export VLLM_NCCL_SO_PATH=/vec-inf/nccl/libnccl.so.2.18.1
28
- module load singularity-ce/3.8.2
29
- singularity exec $SINGULARITY_IMAGE ray stop
30
- singularity exec --nv --bind ${MODEL_WEIGHTS_PARENT_DIR}:${MODEL_WEIGHTS_PARENT_DIR} $SINGULARITY_IMAGE \
31
- python3.10 -m vllm.entrypoints.openai.api_server \
32
- --model ${VLLM_MODEL_WEIGHTS} \
33
- --served-model-name ${JOB_NAME} \
34
- --host "0.0.0.0" \
35
- --port ${vllm_port_number} \
36
- --tensor-parallel-size ${NUM_GPUS} \
37
- --dtype ${VLLM_DATA_TYPE} \
38
- --max-logprobs ${VLLM_MAX_LOGPROBS} \
39
- --trust-remote-code \
40
- --max-model-len ${VLLM_MAX_MODEL_LEN} \
41
- --max-num-seqs ${VLLM_MAX_NUM_SEQS} \
42
- --task ${VLLM_TASK} \
43
- ${ENFORCE_EAGER}
44
- else
45
- source ${VENV_BASE}/bin/activate
46
- python3 -m vllm.entrypoints.openai.api_server \
47
- --model ${VLLM_MODEL_WEIGHTS} \
48
- --served-model-name ${JOB_NAME} \
49
- --host "0.0.0.0" \
50
- --port ${vllm_port_number} \
51
- --tensor-parallel-size ${NUM_GPUS} \
52
- --dtype ${VLLM_DATA_TYPE} \
53
- --max-logprobs ${VLLM_MAX_LOGPROBS} \
54
- --trust-remote-code \
55
- --max-model-len ${VLLM_MAX_MODEL_LEN} \
56
- --max-num-seqs ${VLLM_MAX_NUM_SEQS} \
57
- --task ${VLLM_TASK} \
58
- ${ENFORCE_EAGER}
59
- fi