vec_inf-0.6.1-py3-none-any.whl → vec_inf-0.7.1-py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: vec-inf
-Version: 0.6.1
+Version: 0.7.1
 Summary: Efficient LLM inference on Slurm clusters using vLLM.
 Author-email: Marshall Wang <marshall.wang@vectorinstitute.ai>
 License-Expression: MIT
@@ -14,9 +14,9 @@ Requires-Dist: rich>=13.7.0
 Provides-Extra: dev
 Requires-Dist: cupy-cuda12x==12.1.0; extra == 'dev'
 Requires-Dist: ray>=2.40.0; extra == 'dev'
-Requires-Dist: torch>=2.5.1; extra == 'dev'
+Requires-Dist: torch>=2.7.0; extra == 'dev'
 Requires-Dist: vllm-nccl-cu12<2.19,>=2.18; extra == 'dev'
-Requires-Dist: vllm>=0.7.3; extra == 'dev'
+Requires-Dist: vllm>=0.10.0; extra == 'dev'
 Requires-Dist: xgrammar>=0.1.11; extra == 'dev'
 Description-Content-Type: text/markdown
 
@@ -29,10 +29,12 @@ Description-Content-Type: text/markdown
 [![code checks](https://github.com/VectorInstitute/vector-inference/actions/workflows/code_checks.yml/badge.svg)](https://github.com/VectorInstitute/vector-inference/actions/workflows/code_checks.yml)
 [![docs](https://github.com/VectorInstitute/vector-inference/actions/workflows/docs.yml/badge.svg)](https://github.com/VectorInstitute/vector-inference/actions/workflows/docs.yml)
 [![codecov](https://codecov.io/github/VectorInstitute/vector-inference/branch/main/graph/badge.svg?token=NI88QSIGAC)](https://app.codecov.io/github/VectorInstitute/vector-inference/tree/main)
-[![vLLM](https://img.shields.io/badge/vllm-0.8.5.post1-blue)](https://docs.vllm.ai/en/v0.8.5.post1/index.html)
+[![vLLM](https://img.shields.io/badge/vLLM-0.10.1.1-blue)](https://docs.vllm.ai/en/v0.10.1.1/)
 ![GitHub License](https://img.shields.io/github/license/VectorInstitute/vector-inference)
 
-This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). **All scripts in this repository runs natively on the Vector Institute cluster environment**. To adapt to other environments, update the environment variables in [`vec_inf/client/slurm_vars.py`](vec_inf/client/slurm_vars.py), and the model config for cached model weights in [`vec_inf/config/models.yaml`](vec_inf/config/models.yaml) accordingly.
+This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). **This package runs natively on the Vector Institute cluster environments**. To adapt to other environments, follow the instructions in [Installation](#installation).
+
+**NOTE**: Supported models on Killarney are tracked [here](./MODEL_TRACKING.md).
 
 ## Installation
 If you are using the Vector cluster environment, and you don't need any customization to the inference server environment, run the following to install package:
@@ -40,7 +42,12 @@ If you are using the Vector cluster environment, and you don't need any customiz
 ```bash
 pip install vec-inf
 ```
-Otherwise, we recommend using the provided [`Dockerfile`](Dockerfile) to set up your own environment with the package. The latest image has `vLLM` version `0.8.5.post1`.
+Otherwise, we recommend using the provided [`Dockerfile`](Dockerfile) to set up your own environment with the package. The latest image has `vLLM` version `0.10.1.1`.
+
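For instance, a minimal build-and-run sketch, assuming Docker with the NVIDIA Container Toolkit is available; the image tag is an arbitrary placeholder:

```bash
docker build -t vec-inf:latest .          # run from the repository root, where the Dockerfile lives
docker run --gpus all -it vec-inf:latest  # requires the NVIDIA Container Toolkit
```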
+If you'd like to use `vec-inf` on your own Slurm cluster, you need to update the configuration files. There are 3 ways to do it:
+* Clone the repository, update the `environment.yaml` and `models.yaml` files in [`vec_inf/config`](vec_inf/config/), then install from source by running `pip install .`.
+* The package looks for cached configuration files in your environment before using the default configuration. The default cached configuration directory path points to `/model-weights/vec-inf-shared`; you would need to create an `environment.yaml` and a `models.yaml` there, following the format of these files in [`vec_inf/config`](vec_inf/config/).
+* The package also looks for an environment variable `VEC_INF_CONFIG_DIR`. You can put your `environment.yaml` and `models.yaml` in a directory of your choice and set `VEC_INF_CONFIG_DIR` to point to that location, as shown in the sketch below.
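A minimal sketch of the third option; the directory path is an arbitrary placeholder, and the two files are assumed to be your customized copies:

```bash
# Keep customized copies of the two config files in a directory of your choice
mkdir -p ~/vec-inf-config
cp environment.yaml models.yaml ~/vec-inf-config/
# Point vec-inf at them before launching
export VEC_INF_CONFIG_DIR=~/vec-inf-config
```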
 
 ## Usage
 
@@ -57,78 +64,22 @@ vec-inf launch Meta-Llama-3.1-8B-Instruct
 ```
 You should see an output like the following:
 
-<img width="600" alt="launch_image" src="https://github.com/user-attachments/assets/a72a99fd-4bf2-408e-8850-359761d96c4f">
-
+<img width="720" alt="launch_image" src="https://github.com/user-attachments/assets/c1e0c60c-cf7a-49ed-a426-fdb38ebf88ee" />
 
-#### Overrides
+**NOTE**: You can set the required fields in the environment configuration (`environment.yaml`); it is a mapping between required arguments and their corresponding environment variables. On the Vector **Killarney** cluster environment, the required fields are:
+* `--account`, `-A`: The Slurm account. This argument can be given a default by setting the environment variable `VEC_INF_ACCOUNT`.
+* `--work-dir`, `-D`: A working directory other than your home directory. This argument can be given a default by setting the environment variable `VEC_INF_WORK_DIR`, as sketched below.
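A minimal setup sketch for these required fields; the account name and scratch path are placeholders:

```bash
# Set defaults once (e.g. in ~/.bashrc) so launches don't need --account/--work-dir
export VEC_INF_ACCOUNT=my_slurm_account
export VEC_INF_WORK_DIR=/scratch/$USER/vec-inf
vec-inf launch Meta-Llama-3.1-8B-Instruct
```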
 
-Models that are already supported by `vec-inf` would be launched using the cached configuration (set in [slurm_vars.py](vec_inf/client/slurm_vars.py)) or [default configuration](vec_inf/config/models.yaml). You can override these values by providing additional parameters. Use `vec-inf launch --help` to see the full list of parameters that can be
-overriden. For example, if `qos` is to be overriden:
-
-```bash
-vec-inf launch Meta-Llama-3.1-8B-Instruct --qos <new_qos>
-```
-
-To overwrite default vLLM engine arguments, you can specify the engine arguments in a comma separated string:
-
-```bash
-vec-inf launch Meta-Llama-3.1-8B-Instruct --vllm-args '--max-model-len=65536,--compilation-config=3'
-```
-
-For the full list of vLLM engine arguments, you can find them [here](https://docs.vllm.ai/en/stable/serving/engine_args.html), make sure you select the correct vLLM version.
-
-#### Custom models
-
-You can also launch your own custom model as long as the model architecture is [supported by vLLM](https://docs.vllm.ai/en/stable/models/supported_models.html), and make sure to follow the instructions below:
-* Your model weights directory naming convention should follow `$MODEL_FAMILY-$MODEL_VARIANT` ($MODEL_VARIANT is OPTIONAL).
-* Your model weights directory should contain HuggingFace format weights.
-* You should specify your model configuration by:
-  * Creating a custom configuration file for your model and specify its path via setting the environment variable `VEC_INF_CONFIG`. Check the [default parameters](vec_inf/config/models.yaml) file for the format of the config file. All the parameters for the model should be specified in that config file.
-  * Using launch command options to specify your model setup.
-* For other model launch parameters you can reference the default values for similar models using the [`list` command](#list-command).
-
-Here is an example to deploy a custom [Qwen2.5-7B-Instruct-1M](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M) model which is not
-supported in the default list of models using a user custom config. In this case, the model weights are assumed to be downloaded to
-a `model-weights` directory inside the user's home directory. The weights directory of the model follows the naming convention so it
-would be named `Qwen2.5-7B-Instruct-1M`. The following yaml file would need to be created, lets say it is named `/h/<username>/my-model-config.yaml`.
-
-```yaml
-models:
-  Qwen2.5-7B-Instruct-1M:
-    model_family: Qwen2.5
-    model_variant: 7B-Instruct-1M
-    model_type: LLM
-    gpus_per_node: 1
-    num_nodes: 1
-    vocab_size: 152064
-    qos: m2
-    time: 08:00:00
-    partition: a40
-    model_weights_parent_dir: /h/<username>/model-weights
-    vllm_args:
-      --max-model-len: 1010000
-      --max-num-seqs: 256
-      --compilation-config: 3
-```
-
-You would then set the `VEC_INF_CONFIG` path using:
-
-```bash
-export VEC_INF_CONFIG=/h/<username>/my-model-config.yaml
-```
-
-**NOTE**
-* There are other parameters that can also be added to the config but not shown in this example, check the [`ModelConfig`](vec_inf/client/config.py) for details.
-* Check [vLLM Engine Arguments](https://docs.vllm.ai/en/stable/serving/engine_args.html) for the full list of available vLLM engine arguments, the default parallel size for any parallelization is default to 1, so none of the sizes were set specifically in this example
-* For GPU partitions with non-Ampere architectures, e.g. `rtx6000`, `t4v2`, BF16 isn't supported. For models that have BF16 as the default type, when using a non-Ampere GPU, use FP16 instead, i.e. `--dtype: float16`
-* Setting `--compilation-config` to `3` currently breaks multi-node model launches, so we don't set them for models that require multiple nodes of GPUs.
+Models that are already supported by `vec-inf` are launched using the cached configuration (set in [slurm_vars.py](vec_inf/client/slurm_vars.py)) or the [default configuration](vec_inf/config/models.yaml). You can override these values by providing additional parameters; use `vec-inf launch --help` to see the full list of parameters that can be overridden. You can also launch your own custom model as long as the model architecture is [supported by vLLM](https://docs.vllm.ai/en/stable/models/supported_models.html). For detailed instructions on how to customize your model launch, check out the [`launch` command section in the User Guide](https://vectorinstitute.github.io/vector-inference/latest/user_guide/#launch-command).
 
 #### Other commands
 
-* `status`: Check the model status by providing its Slurm job ID, `--json-mode` supported.
+* `batch-launch`: Launch multiple model inference servers at once; currently ONLY single-node models are supported.
+* `status`: Check the model status by providing its Slurm job ID.
 * `metrics`: Streams performance metrics to the console.
 * `shutdown`: Shutdown a model by providing its Slurm job ID.
-* `list`: List all available model names, or view the default/cached configuration of a specific model, `--json-mode` supported.
+* `list`: List all available model names, or view the default/cached configuration of a specific model.
+* `cleanup`: Remove old log directories; use `--help` to see the supported filters, and `--dry-run` to preview what would be deleted. A quick sketch of these commands follows below.
 
 For more details on the usage of these commands, refer to the [User Guide](https://vectorinstitute.github.io/vector-inference/user_guide/)
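A hedged walkthrough of these commands in sequence; `1234567` is a placeholder Slurm job ID:

```bash
vec-inf status 1234567                     # check the model status
vec-inf metrics 1234567                    # stream performance metrics
vec-inf list                               # list all available model names
vec-inf list Meta-Llama-3.1-8B-Instruct    # view one model's configuration
vec-inf cleanup --dry-run                  # preview which old log directories would be removed
vec-inf shutdown 1234567                   # shut the server down
```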
 
@@ -139,11 +90,17 @@ Example:
 ```python
 >>> from vec_inf.api import VecInfClient
 >>> client = VecInfClient()
+>>> # Assume VEC_INF_ACCOUNT and VEC_INF_WORK_DIR are set
 >>> response = client.launch_model("Meta-Llama-3.1-8B-Instruct")
 >>> job_id = response.slurm_job_id
 >>> status = client.get_status(job_id)
 >>> if status.status == ModelStatus.READY:
 ...     print(f"Model is ready at {status.base_url}")
+>>> # Alternatively, use wait_until_ready, which either returns a StatusResponse or raises a ServerError
+>>> try:
+...     status = client.wait_until_ready(job_id)
+... except ServerError as e:
+...     print(f"Model launch failed: {e}")
 >>> client.shutdown_model(job_id)
 ```
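Once the inference server is ready, you can start sending in inference requests. A minimal request sketch, assuming the server exposes vLLM's OpenAI-compatible API and that `<base_url>` is the URL reported by the status check above; adjust the route to match your deployment:

```bash
curl <base_url>/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Meta-Llama-3.1-8B-Instruct", "prompt": "Hello, my name is", "max_tokens": 16}'
```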
 
@@ -194,6 +151,19 @@ Once the inference server is ready, you can start sending in inference requests.
 ## SSH tunnel from your local device
 If you want to run inference from your local device, you can open a SSH tunnel to your cluster environment like the following:
 ```bash
-ssh -L 8081:172.17.8.29:8081 username@v.vectorinstitute.ai -N
+ssh -L 8081:10.1.1.29:8081 username@v.vectorinstitute.ai -N
+```
+The example above is for the Vector Killarney cluster; change the variables accordingly for your environment. The IP addresses of the Killarney compute nodes follow the `10.1.1.XX` pattern, where `XX` is the node number (`kn029` -> `29` in this example), as in the sketch below.
+
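A small sketch that derives the compute-node IP from a Killarney node name and opens the tunnel; the node name and username are placeholders:

```bash
NODE=kn029
IP="10.1.1.$((10#${NODE#kn}))"   # strip the "kn" prefix, force base 10 (029 -> 29)
ssh -L 8081:${IP}:8081 username@v.vectorinstitute.ai -N
```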
+## Reference
+If you found Vector Inference useful in your research or applications, please cite using the following BibTeX template:
+```
+@software{vector_inference,
+  title = {Vector Inference: Efficient LLM inference on Slurm clusters using vLLM},
+  author = {Wang, Marshall},
+  organization = {Vector Institute},
+  year = {<YEAR_OF_RELEASE>},
+  version = {<VERSION_TAG>},
+  url = {https://github.com/VectorInstitute/vector-inference}
+}
 ```
-Where the last number in the URL is the GPU number (gpu029 in this case). The example provided above is for the vector cluster, change the variables accordingly for your environment
@@ -0,0 +1,27 @@
+vec_inf/README.md,sha256=WyvjbSs5Eh5fp8u66bgOaO3FQKP2U7m_HbLgqTHs_ng,1322
+vec_inf/__init__.py,sha256=bHwSIz9lebYuxIemni-lP0h3gwJHVbJnwExQKGJWw_Q,23
+vec_inf/find_port.sh,sha256=bGQ6LYSFVSsfDIGatrSg5YvddbZfaPL0R-Bjo4KYD6I,1088
+vec_inf/cli/__init__.py,sha256=5XIvGQCOnaGl73XMkwetjC-Ul3xuXGrWDXdYJ3aUzvU,27
+vec_inf/cli/_cli.py,sha256=xrYce8iP2Wo5dNflvUO2gIfkyjA4V_V8mpiaxnMDwkk,15813
+vec_inf/cli/_helper.py,sha256=Jr9NnMhGflkx3YEfYCN1rMHQgUzMAAwlSx_BLH92tVM,16511
+vec_inf/cli/_utils.py,sha256=23vSbmvNOWY1-W1aOAwYqNDkDDmx-5UVlCiXAtxUZ8A,1057
+vec_inf/cli/_vars.py,sha256=V6DrJs_BuUa4yNcbBSSnMwpcyXwEBsizy3D0ubIg2fA,777
+vec_inf/client/__init__.py,sha256=OLlUJ4kL1R-Kh-nXNbvKlAZ3mtHcnozHprVufkVCNWk,739
+vec_inf/client/_client_vars.py,sha256=1D-bX9dS0-pFImLvgWt2hUnwJiz-VaxuLb2HIfPML8I,2408
+vec_inf/client/_exceptions.py,sha256=94Nx_5k1SriJNXzbdnwyXFZolyMutydU08Gsikawzzo,749
+vec_inf/client/_helper.py,sha256=P8A9JHRMzxJRl0dgTuv9xfOluEV3BthUM1KzQlWkR7E,35752
+vec_inf/client/_slurm_script_generator.py,sha256=d2NowdKMQR1lsVI_hw9ObKC3uSk8YJr75ZYRMkvp0RA,13354
+vec_inf/client/_slurm_templates.py,sha256=TAH-wQV4gP2CCwxP3BmShebohtSmlMstlJT9QK6n4Dc,8277
+vec_inf/client/_slurm_vars.py,sha256=sgP__XhpE1K7pvOzVFmotUXmINYPcOuFP-zGaePT5Iw,2910
+vec_inf/client/_utils.py,sha256=XamAz8-AJELgkXHrR082ptTsbHSiWI47SY6MlXA44rU,12593
+vec_inf/client/api.py,sha256=pkgNE37r7LzYBDjRGAKAh7rhOUMKHGwghJh6Hfb45TI,11681
+vec_inf/client/config.py,sha256=VU4h2iqL0rxYAqGw2HBF_l6QvvSDJy5M79IgX5G2PW4,5830
+vec_inf/client/models.py,sha256=qxLxsVoEhxNkuCmtABqs8In5erkwTZDK0wih7U2_U38,7296
+vec_inf/config/README.md,sha256=TvZOqZyTUaAFr71hC7GVgg6QUw80AXREyq8wS4D-F30,528
+vec_inf/config/environment.yaml,sha256=oEDp85hUERJO9NNn4wYhcgunnmkln50GNHDzG_3isMw,678
+vec_inf/config/models.yaml,sha256=vzAOqEu6M_lXput83MAhNzj-aNGSBzjbC6LydOmNqxk,26248
+vec_inf-0.7.1.dist-info/METADATA,sha256=CJEnzc3VLXxJ_00I1ubtwNNZQjvafddxlJyoi_bSwpo,10047
+vec_inf-0.7.1.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
+vec_inf-0.7.1.dist-info/entry_points.txt,sha256=uNRXjCuJSR2nveEqD3IeMznI9oVI9YLZh5a24cZg6B0,49
+vec_inf-0.7.1.dist-info/licenses/LICENSE,sha256=mq8zeqpvVSF1EsxmydeXcokt8XnEIfSofYn66S2-cJI,1073
+vec_inf-0.7.1.dist-info/RECORD,,
@@ -1,49 +0,0 @@
-"""Slurm cluster configuration variables."""
-
-from pathlib import Path
-
-from typing_extensions import Literal
-
-
-CACHED_CONFIG = Path("/", "model-weights", "vec-inf-shared", "models_latest.yaml")
-LD_LIBRARY_PATH = "/scratch/ssd001/pkgs/cudnn-11.7-v8.5.0.96/lib/:/scratch/ssd001/pkgs/cuda-11.7/targets/x86_64-linux/lib/"
-SINGULARITY_IMAGE = "/model-weights/vec-inf-shared/vector-inference_latest.sif"
-SINGULARITY_LOAD_CMD = "module load singularity-ce/3.8.2"
-VLLM_NCCL_SO_PATH = "/vec-inf/nccl/libnccl.so.2.18.1"
-MAX_GPUS_PER_NODE = 8
-MAX_NUM_NODES = 16
-MAX_CPUS_PER_TASK = 128
-
-QOS = Literal[
-    "normal",
-    "m",
-    "m2",
-    "m3",
-    "m4",
-    "m5",
-    "long",
-    "deadline",
-    "high",
-    "scavenger",
-    "llm",
-    "a100",
-]
-
-PARTITION = Literal[
-    "a40",
-    "a100",
-    "t4v1",
-    "t4v2",
-    "rtx6000",
-]
-
-DEFAULT_ARGS = {
-    "cpus_per_task": 16,
-    "mem_per_node": "64G",
-    "qos": "m2",
-    "time": "08:00:00",
-    "partition": "a40",
-    "data_type": "auto",
-    "log_dir": "~/.vec-inf-logs",
-    "model_weights_parent_dir": "/model-weights",
-}
@@ -1,25 +0,0 @@
-vec_inf/README.md,sha256=3ocJHfV3kRftXFUCdHw3B-p4QQlXuNqkHnjPPNkCgfM,543
-vec_inf/__init__.py,sha256=bHwSIz9lebYuxIemni-lP0h3gwJHVbJnwExQKGJWw_Q,23
-vec_inf/find_port.sh,sha256=bGQ6LYSFVSsfDIGatrSg5YvddbZfaPL0R-Bjo4KYD6I,1088
-vec_inf/cli/__init__.py,sha256=5XIvGQCOnaGl73XMkwetjC-Ul3xuXGrWDXdYJ3aUzvU,27
-vec_inf/cli/_cli.py,sha256=pqZeQr5WxAsV7KSYcUnx_mRL7RnHWk1zf9CcW_ct5uI,10663
-vec_inf/cli/_helper.py,sha256=i1QvJeIT3z7me6bv2Vot5c3NY555Dgo3q8iRlxhOlZ4,13047
-vec_inf/cli/_utils.py,sha256=23vSbmvNOWY1-W1aOAwYqNDkDDmx-5UVlCiXAtxUZ8A,1057
-vec_inf/cli/_vars.py,sha256=V6DrJs_BuUa4yNcbBSSnMwpcyXwEBsizy3D0ubIg2fA,777
-vec_inf/client/__init__.py,sha256=OLlUJ4kL1R-Kh-nXNbvKlAZ3mtHcnozHprVufkVCNWk,739
-vec_inf/client/_client_vars.py,sha256=KG-xImVIzJH3aj5nMUzT9w9LpH-7YGrOew6N77Fj0Js,7638
-vec_inf/client/_exceptions.py,sha256=94Nx_5k1SriJNXzbdnwyXFZolyMutydU08Gsikawzzo,749
-vec_inf/client/_helper.py,sha256=DcEFogbrSb4A8Kc2zixNZNL4nt4iswPk2n5blZgwEWQ,22338
-vec_inf/client/_slurm_script_generator.py,sha256=XYCsadCLDEu9KrrjrNCNgoc0ITmjys9u7yWR9PkFAos,6376
-vec_inf/client/_utils.py,sha256=1dB2O1neEhZNk6MJbBybLQm42vsmEevA2TI0F_kGi0o,8796
-vec_inf/client/api.py,sha256=TYn4lP5Ene8MEuXWYo6ZbGYw9aPnaMlT32SH7jLCifM,9605
-vec_inf/client/config.py,sha256=lPVHwiaGZjKd5M9G7vcsk3DMausFP_telq3JQngBkH8,5080
-vec_inf/client/models.py,sha256=qjocUa5egJTVeVF3962kYOecs1dTaEb2e6TswkYFXM0,6141
-vec_inf/client/slurm_vars.py,sha256=lroK41L4gEVVZNxxE3bEpbKsdMwnH79-7iCKd4zWEa4,1069
-vec_inf/config/README.md,sha256=OlgnD_Ojei_xLkNyS7dGvYMFUzQFqjVRVw0V-QMk_3g,17863
-vec_inf/config/models.yaml,sha256=xImSOjG9yL6LqqYkSLL7_wBZhqKM10-eFaQJ82gP4ig,29420
-vec_inf-0.6.1.dist-info/METADATA,sha256=0YHT8rhEZINfmMF1hQBqU0HBpRbwX-1IeqY_Mla4g28,10682
-vec_inf-0.6.1.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
-vec_inf-0.6.1.dist-info/entry_points.txt,sha256=uNRXjCuJSR2nveEqD3IeMznI9oVI9YLZh5a24cZg6B0,49
-vec_inf-0.6.1.dist-info/licenses/LICENSE,sha256=mq8zeqpvVSF1EsxmydeXcokt8XnEIfSofYn66S2-cJI,1073
-vec_inf-0.6.1.dist-info/RECORD,,