vec-inf 0.6.0__py3-none-any.whl → 0.7.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: vec-inf
- Version: 0.6.0
+ Version: 0.7.0
  Summary: Efficient LLM inference on Slurm clusters using vLLM.
  Author-email: Marshall Wang <marshall.wang@vectorinstitute.ai>
  License-Expression: MIT
@@ -14,9 +14,9 @@ Requires-Dist: rich>=13.7.0
  Provides-Extra: dev
  Requires-Dist: cupy-cuda12x==12.1.0; extra == 'dev'
  Requires-Dist: ray>=2.40.0; extra == 'dev'
- Requires-Dist: torch>=2.5.1; extra == 'dev'
+ Requires-Dist: torch>=2.7.0; extra == 'dev'
  Requires-Dist: vllm-nccl-cu12<2.19,>=2.18; extra == 'dev'
- Requires-Dist: vllm>=0.7.3; extra == 'dev'
+ Requires-Dist: vllm>=0.10.0; extra == 'dev'
  Requires-Dist: xgrammar>=0.1.11; extra == 'dev'
  Description-Content-Type: text/markdown
 
@@ -29,9 +29,12 @@ Description-Content-Type: text/markdown
  [![code checks](https://github.com/VectorInstitute/vector-inference/actions/workflows/code_checks.yml/badge.svg)](https://github.com/VectorInstitute/vector-inference/actions/workflows/code_checks.yml)
  [![docs](https://github.com/VectorInstitute/vector-inference/actions/workflows/docs.yml/badge.svg)](https://github.com/VectorInstitute/vector-inference/actions/workflows/docs.yml)
  [![codecov](https://codecov.io/github/VectorInstitute/vector-inference/branch/main/graph/badge.svg?token=NI88QSIGAC)](https://app.codecov.io/github/VectorInstitute/vector-inference/tree/main)
+ [![vLLM](https://img.shields.io/badge/vLLM-0.10.1.1-blue)](https://docs.vllm.ai/en/v0.10.1.1/)
  ![GitHub License](https://img.shields.io/github/license/VectorInstitute/vector-inference)
 
- This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). **All scripts in this repository runs natively on the Vector Institute cluster environment**. To adapt to other environments, update the environment variables in [`vec_inf/client/slurm_vars.py`](vec_inf/client/slurm_vars.py), and the model config for cached model weights in [`vec_inf/config/models.yaml`](vec_inf/config/models.yaml) accordingly.
+ This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). **This package runs natively on the Vector Institute cluster environments**. To adapt to other environments, follow the instructions in [Installation](#installation).
+
+ **NOTE**: Supported models on Killarney are tracked [here](./MODEL_TRACKING.md).
 
  ## Installation
  If you are using the Vector cluster environment, and you don't need any customization to the inference server environment, run the following to install the package:
@@ -39,7 +42,12 @@ If you are using the Vector cluster environment, and you don't need any customiz
  ```bash
  pip install vec-inf
  ```
- Otherwise, we recommend using the provided [`Dockerfile`](Dockerfile) to set up your own environment with the package
+ Otherwise, we recommend using the provided [`Dockerfile`](Dockerfile) to set up your own environment with the package. The latest image has `vLLM` version `0.10.1.1`.
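+
+ For example, a minimal build-and-run sketch (the image tag is a placeholder, and the GPU flag assumes the NVIDIA container runtime is available):
+
+ ```bash
+ # Build the image from the provided Dockerfile at the repository root
+ docker build -t vec-inf:custom .
+ # Start an interactive shell in the container with GPUs attached
+ docker run --gpus all -it vec-inf:custom bash
+ ```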
+
+ If you'd like to use `vec-inf` on your own Slurm cluster, you will need to update the configuration files. There are three ways to do this:
+ * Clone the repository, update the `environment.yaml` and `models.yaml` files in [`vec_inf/config`](vec_inf/config/), then install from source by running `pip install .`.
+ * The package looks for cached configuration files in your environment before falling back to the default configuration. The default cached configuration directory is `/model-weights/vec-inf-shared`; create an `environment.yaml` and a `models.yaml` there, following the format of the files in [`vec_inf/config`](vec_inf/config/).
+ * The package also checks the environment variable `VEC_INF_CONFIG_DIR`. Put your `environment.yaml` and `models.yaml` in a directory of your choice and set `VEC_INF_CONFIG_DIR` to point to that location, as sketched below.
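+
+ A minimal sketch of the third option (the target directory is a placeholder, and the `cp` assumes a checkout of this repository to copy the reference files from):
+
+ ```bash
+ # Keep site-specific config outside the package install
+ mkdir -p ~/vec-inf-config
+ cp vec_inf/config/environment.yaml vec_inf/config/models.yaml ~/vec-inf-config/
+ # Tell vec-inf where to find the custom config (add to ~/.bashrc to persist)
+ export VEC_INF_CONFIG_DIR=~/vec-inf-config
+ ```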
 
  ## Usage
 
@@ -56,74 +64,22 @@ vec-inf launch Meta-Llama-3.1-8B-Instruct
  ```
  You should see an output like the following:
 
- <img width="600" alt="launch_image" src="https://github.com/user-attachments/assets/a72a99fd-4bf2-408e-8850-359761d96c4f">
-
-
- #### Overrides
-
- Models that are already supported by `vec-inf` would be launched using the cached configuration (set in [slurm_vars.py](vec_inf/client/slurm_vars.py)) or [default configuration](vec_inf/config/models.yaml). You can override these values by providing additional parameters. Use `vec-inf launch --help` to see the full list of parameters that can be
- overriden. For example, if `qos` is to be overriden:
-
- ```bash
- vec-inf launch Meta-Llama-3.1-8B-Instruct --qos <new_qos>
- ```
-
- To overwrite default vLLM engine arguments, you can specify the engine arguments in a comma separated string:
+ <img width="720" alt="launch_image" src="https://github.com/user-attachments/assets/c1e0c60c-cf7a-49ed-a426-fdb38ebf88ee" />
 
- ```bash
- vec-inf launch Meta-Llama-3.1-8B-Instruct --vllm-args '--max-model-len=65536,--compilation-config=3'
- ```
+ **NOTE**: On the Vector Killarney cluster environment, the following fields are required (see the sketch below):
+ * `--account`, `-A`: The Slurm account; a default can be set via the environment variable `VEC_INF_ACCOUNT`.
+ * `--work-dir`, `-D`: A working directory other than your home directory; a default can be set via the environment variable `VEC_INF_WORK_DIR`.
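+
+ A minimal sketch (the account name and directory are placeholders):
+
+ ```bash
+ # Set defaults once so every launch picks them up
+ export VEC_INF_ACCOUNT=my-slurm-account
+ export VEC_INF_WORK_DIR=/scratch/$USER/vec-inf
+ vec-inf launch Meta-Llama-3.1-8B-Instruct
+
+ # Or pass them explicitly per launch
+ vec-inf launch Meta-Llama-3.1-8B-Instruct -A my-slurm-account -D /scratch/$USER/vec-inf
+ ```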
 
- For the full list of vLLM engine arguments, you can find them [here](https://docs.vllm.ai/en/stable/serving/engine_args.html), make sure you select the correct vLLM version.
-
- #### Custom models
-
- You can also launch your own custom model as long as the model architecture is [supported by vLLM](https://docs.vllm.ai/en/stable/models/supported_models.html), and make sure to follow the instructions below:
- * Your model weights directory naming convention should follow `$MODEL_FAMILY-$MODEL_VARIANT` ($MODEL_VARIANT is OPTIONAL).
- * Your model weights directory should contain HuggingFace format weights.
- * You should specify your model configuration by:
-   * Creating a custom configuration file for your model and specify its path via setting the environment variable `VEC_INF_CONFIG`. Check the [default parameters](vec_inf/config/models.yaml) file for the format of the config file. All the parameters for the model should be specified in that config file.
-   * Using launch command options to specify your model setup.
- * For other model launch parameters you can reference the default values for similar models using the [`list` command](#list-command).
-
- Here is an example to deploy a custom [Qwen2.5-7B-Instruct-1M](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M) model which is not
- supported in the default list of models using a user custom config. In this case, the model weights are assumed to be downloaded to
- a `model-weights` directory inside the user's home directory. The weights directory of the model follows the naming convention so it
- would be named `Qwen2.5-7B-Instruct-1M`. The following yaml file would need to be created, lets say it is named `/h/<username>/my-model-config.yaml`.
-
- ```yaml
- models:
-   Qwen2.5-7B-Instruct-1M:
-     model_family: Qwen2.5
-     model_variant: 7B-Instruct-1M
-     model_type: LLM
-     gpus_per_node: 1
-     num_nodes: 1
-     vocab_size: 152064
-     qos: m2
-     time: 08:00:00
-     partition: a40
-     model_weights_parent_dir: /h/<username>/model-weights
-     vllm_args:
-       --max-model-len: 1010000
-       --max-num-seqs: 256
-       --compilation-config: 3
- ```
-
- You would then set the `VEC_INF_CONFIG` path using:
-
- ```bash
- export VEC_INF_CONFIG=/h/<username>/my-model-config.yaml
- ```
-
- Note that there are other parameters that can also be added to the config but not shown in this example, check the [`ModelConfig`](vec_inf/client/config.py) for details.
+ Models that are already supported by `vec-inf` are launched using the cached configuration (set in [slurm_vars.py](vec_inf/client/slurm_vars.py)) or the [default configuration](vec_inf/config/models.yaml). You can override these values by providing additional parameters; use `vec-inf launch --help` to see the full list of parameters that can be overridden. You can also launch your own custom model as long as the model architecture is [supported by vLLM](https://docs.vllm.ai/en/stable/models/supported_models.html). For detailed instructions on how to customize your model launch, check out the [`launch` command section in the User Guide](https://vectorinstitute.github.io/vector-inference/latest/user_guide/#launch-command).
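+
+ For example, a hedged sketch of an override (check `vec-inf launch --help` for the flags available in your version; the `--qos` flag here follows the 0.6.0 README and may differ):
+
+ ```bash
+ vec-inf launch Meta-Llama-3.1-8B-Instruct --qos <new_qos>
+ ```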
 
  #### Other commands
 
- * `status`: Check the model status by providing its Slurm job ID, `--json-mode` supported.
+ * `batch-launch`: Launch multiple model inference servers at once; currently only single-node models are supported.
+ * `status`: Check the model status by providing its Slurm job ID.
  * `metrics`: Streams performance metrics to the console.
  * `shutdown`: Shutdown a model by providing its Slurm job ID.
- * `list`: List all available model names, or view the default/cached configuration of a specific model, `--json-mode` supported.
+ * `list`: List all available model names, or view the default/cached configuration of a specific model.
+ * `cleanup`: Remove old log directories; use `--help` to see the supported filters and `--dry-run` to preview what would be deleted. A short usage sketch follows below.
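+
+ A minimal usage sketch (the job ID is a placeholder; commands and flags per the bullets above):
+
+ ```bash
+ vec-inf status 1234567      # check on a running server
+ vec-inf metrics 1234567     # stream performance metrics
+ vec-inf shutdown 1234567    # stop the server
+ vec-inf cleanup --dry-run   # preview which old log directories would be removed
+ ```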
 
  For more details on the usage of these commands, refer to the [User Guide](https://vectorinstitute.github.io/vector-inference/user_guide/).
 
@@ -134,6 +90,7 @@ Example:
  ```python
  >>> from vec_inf.api import VecInfClient
  >>> client = VecInfClient()
+ >>> # Assume VEC_INF_ACCOUNT and VEC_INF_WORK_DIR are set
  >>> response = client.launch_model("Meta-Llama-3.1-8B-Instruct")
  >>> job_id = response.slurm_job_id
  >>> status = client.get_status(job_id)
@@ -182,8 +139,9 @@ Once the inference server is ready, you can start sending in inference requests.
  },
  "prompt_logprobs":null
  }
+
  ```
- **NOTE**: For multimodal models, currently only `ChatCompletion` is available, and only one image can be provided for each prompt.
+ **NOTE**: Certain models don't adhere to OpenAI's chat template, e.g., the Mistral family. For these models, you can either change your prompt to follow the model's default chat template or provide your own chat template via `--chat-template: TEMPLATE_PATH`.
 
  ## SSH tunnel from your local device
  If you want to run inference from your local device, you can open an SSH tunnel to your cluster environment like the following:
@@ -0,0 +1,27 @@
+ vec_inf/README.md,sha256=WyvjbSs5Eh5fp8u66bgOaO3FQKP2U7m_HbLgqTHs_ng,1322
+ vec_inf/__init__.py,sha256=bHwSIz9lebYuxIemni-lP0h3gwJHVbJnwExQKGJWw_Q,23
+ vec_inf/find_port.sh,sha256=bGQ6LYSFVSsfDIGatrSg5YvddbZfaPL0R-Bjo4KYD6I,1088
+ vec_inf/cli/__init__.py,sha256=5XIvGQCOnaGl73XMkwetjC-Ul3xuXGrWDXdYJ3aUzvU,27
+ vec_inf/cli/_cli.py,sha256=xrYce8iP2Wo5dNflvUO2gIfkyjA4V_V8mpiaxnMDwkk,15813
+ vec_inf/cli/_helper.py,sha256=Jr9NnMhGflkx3YEfYCN1rMHQgUzMAAwlSx_BLH92tVM,16511
+ vec_inf/cli/_utils.py,sha256=23vSbmvNOWY1-W1aOAwYqNDkDDmx-5UVlCiXAtxUZ8A,1057
+ vec_inf/cli/_vars.py,sha256=V6DrJs_BuUa4yNcbBSSnMwpcyXwEBsizy3D0ubIg2fA,777
+ vec_inf/client/__init__.py,sha256=OLlUJ4kL1R-Kh-nXNbvKlAZ3mtHcnozHprVufkVCNWk,739
+ vec_inf/client/_client_vars.py,sha256=qt47xQyZX2YcBtxk5qqmsE6qM5c3m8E2RhRBa2AY068,2619
+ vec_inf/client/_exceptions.py,sha256=94Nx_5k1SriJNXzbdnwyXFZolyMutydU08Gsikawzzo,749
+ vec_inf/client/_helper.py,sha256=P8A9JHRMzxJRl0dgTuv9xfOluEV3BthUM1KzQlWkR7E,35752
+ vec_inf/client/_slurm_script_generator.py,sha256=d2NowdKMQR1lsVI_hw9ObKC3uSk8YJr75ZYRMkvp0RA,13354
+ vec_inf/client/_slurm_templates.py,sha256=TAH-wQV4gP2CCwxP3BmShebohtSmlMstlJT9QK6n4Dc,8277
+ vec_inf/client/_slurm_vars.py,sha256=9BGA4Y4dGzXez6FG4V53GsMlHb9xOj7W1d7ANjkTvSQ,2723
+ vec_inf/client/_utils.py,sha256=aQoPFYUNjp0OGHDdvPu1oec_Eslv0PjtKAiW54WSgAo,12593
+ vec_inf/client/api.py,sha256=pkgNE37r7LzYBDjRGAKAh7rhOUMKHGwghJh6Hfb45TI,11681
+ vec_inf/client/config.py,sha256=VU4h2iqL0rxYAqGw2HBF_l6QvvSDJy5M79IgX5G2PW4,5830
+ vec_inf/client/models.py,sha256=qxLxsVoEhxNkuCmtABqs8In5erkwTZDK0wih7U2_U38,7296
+ vec_inf/config/README.md,sha256=TvZOqZyTUaAFr71hC7GVgg6QUw80AXREyq8wS4D-F30,528
+ vec_inf/config/environment.yaml,sha256=VBBlHx6zbYnzjwhWcsUI6m5Xqc-2KLPOr1oZ6GUlIWk,602
+ vec_inf/config/models.yaml,sha256=vzAOqEu6M_lXput83MAhNzj-aNGSBzjbC6LydOmNqxk,26248
+ vec_inf-0.7.0.dist-info/METADATA,sha256=4JtnZxIZA1QXN6m5YsMEUWxb_HjKGgnNBFGf8Pe-IuI,9088
+ vec_inf-0.7.0.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
+ vec_inf-0.7.0.dist-info/entry_points.txt,sha256=uNRXjCuJSR2nveEqD3IeMznI9oVI9YLZh5a24cZg6B0,49
+ vec_inf-0.7.0.dist-info/licenses/LICENSE,sha256=mq8zeqpvVSF1EsxmydeXcokt8XnEIfSofYn66S2-cJI,1073
+ vec_inf-0.7.0.dist-info/RECORD,,
@@ -1,49 +0,0 @@
- """Slurm cluster configuration variables."""
-
- from pathlib import Path
-
- from typing_extensions import Literal
-
-
- CACHED_CONFIG = Path("/", "model-weights", "vec-inf-shared", "models_latest.yaml")
- LD_LIBRARY_PATH = "/scratch/ssd001/pkgs/cudnn-11.7-v8.5.0.96/lib/:/scratch/ssd001/pkgs/cuda-11.7/targets/x86_64-linux/lib/"
- SINGULARITY_IMAGE = "/model-weights/vec-inf-shared/vector-inference_latest.sif"
- SINGULARITY_LOAD_CMD = "module load singularity-ce/3.8.2"
- VLLM_NCCL_SO_PATH = "/vec-inf/nccl/libnccl.so.2.18.1"
- MAX_GPUS_PER_NODE = 8
- MAX_NUM_NODES = 16
- MAX_CPUS_PER_TASK = 128
-
- QOS = Literal[
-     "normal",
-     "m",
-     "m2",
-     "m3",
-     "m4",
-     "m5",
-     "long",
-     "deadline",
-     "high",
-     "scavenger",
-     "llm",
-     "a100",
- ]
-
- PARTITION = Literal[
-     "a40",
-     "a100",
-     "t4v1",
-     "t4v2",
-     "rtx6000",
- ]
-
- DEFAULT_ARGS = {
-     "cpus_per_task": 16,
-     "mem_per_node": "64G",
-     "qos": "m2",
-     "time": "08:00:00",
-     "partition": "a40",
-     "data_type": "auto",
-     "log_dir": "~/.vec-inf-logs",
-     "model_weights_parent_dir": "/model-weights",
- }
@@ -1,25 +0,0 @@
- vec_inf/README.md,sha256=3ocJHfV3kRftXFUCdHw3B-p4QQlXuNqkHnjPPNkCgfM,543
- vec_inf/__init__.py,sha256=bHwSIz9lebYuxIemni-lP0h3gwJHVbJnwExQKGJWw_Q,23
- vec_inf/find_port.sh,sha256=bGQ6LYSFVSsfDIGatrSg5YvddbZfaPL0R-Bjo4KYD6I,1088
- vec_inf/cli/__init__.py,sha256=5XIvGQCOnaGl73XMkwetjC-Ul3xuXGrWDXdYJ3aUzvU,27
- vec_inf/cli/_cli.py,sha256=bqyLvFK4Vqoh-wAaUPg50_qYbrW-c9Cl_-YySgVk5_M,9871
- vec_inf/cli/_helper.py,sha256=i1QvJeIT3z7me6bv2Vot5c3NY555Dgo3q8iRlxhOlZ4,13047
- vec_inf/cli/_utils.py,sha256=23vSbmvNOWY1-W1aOAwYqNDkDDmx-5UVlCiXAtxUZ8A,1057
- vec_inf/cli/_vars.py,sha256=V6DrJs_BuUa4yNcbBSSnMwpcyXwEBsizy3D0ubIg2fA,777
- vec_inf/client/__init__.py,sha256=OLlUJ4kL1R-Kh-nXNbvKlAZ3mtHcnozHprVufkVCNWk,739
- vec_inf/client/_client_vars.py,sha256=eVQjpuASd8beBjAeAbQnMRZM8nCLZMHx-x62BcXVnYA,7163
- vec_inf/client/_exceptions.py,sha256=94Nx_5k1SriJNXzbdnwyXFZolyMutydU08Gsikawzzo,749
- vec_inf/client/_helper.py,sha256=76OTCroNR5e3e7T2qSV_tkexDaUQsJrs8bFiMJ5NaxU,22718
- vec_inf/client/_slurm_script_generator.py,sha256=jFgr2Pu7b_Uqli3DBvxUr9MI1-3TA6wwxg07O2rTwPs,6299
- vec_inf/client/_utils.py,sha256=1dB2O1neEhZNk6MJbBybLQm42vsmEevA2TI0F_kGi0o,8796
- vec_inf/client/api.py,sha256=TYn4lP5Ene8MEuXWYo6ZbGYw9aPnaMlT32SH7jLCifM,9605
- vec_inf/client/config.py,sha256=kOhxoepsvArxRFNlwq1sLDHsxDewLwxRV1VwsL0MrGU,4683
- vec_inf/client/models.py,sha256=JZDUMBX3XKOClaq_yJUpDUSgiDy42nT5Dq5bxQWiO2I,5778
- vec_inf/client/slurm_vars.py,sha256=lroK41L4gEVVZNxxE3bEpbKsdMwnH79-7iCKd4zWEa4,1069
- vec_inf/config/README.md,sha256=OlgnD_Ojei_xLkNyS7dGvYMFUzQFqjVRVw0V-QMk_3g,17863
- vec_inf/config/models.yaml,sha256=PR91vOzINVOkAco9S-_VIXQ5Un6ekeoWz2Pj8DMR8LQ,29630
- vec_inf-0.6.0.dist-info/METADATA,sha256=-xadTsrAR3tOfPyxTdGB9DLuhWMu_mnp_JF5Aa-1-08,9755
- vec_inf-0.6.0.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
- vec_inf-0.6.0.dist-info/entry_points.txt,sha256=uNRXjCuJSR2nveEqD3IeMznI9oVI9YLZh5a24cZg6B0,49
- vec_inf-0.6.0.dist-info/licenses/LICENSE,sha256=mq8zeqpvVSF1EsxmydeXcokt8XnEIfSofYn66S2-cJI,1073
- vec_inf-0.6.0.dist-info/RECORD,,