vec-inf 0.7.2__py3-none-any.whl → 0.8.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
vec_inf/find_port.sh CHANGED
@@ -28,7 +28,16 @@ find_available_port() {
     local base_port=$2
     local max_port=$3
 
-    for ((port=base_port; port<=max_port; port++)); do
+    # Generate shuffled list of ports; fallback to sequential if shuf not present
+    if command -v shuf >/dev/null 2>&1; then
+        local port_list
+        port_list=$(shuf -i "${base_port}-${max_port}")
+    else
+        local port_list
+        port_list=$(seq $base_port $max_port)
+    fi
+
+    for port in $port_list; do
         if is_port_available $ip $port; then
             echo $port
             return
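The shuffled-with-fallback iteration introduced above can be illustrated standalone. This is a minimal sketch, not the package's actual script, assuming GNU-style `shuf -i` and `seq`; it shows that either branch still visits every port in the range exactly once, only in a different order:

```shell
#!/bin/sh
base_port=8080
max_port=8089

# Prefer a shuffled ordering so concurrent jobs are less likely to race
# for the same low-numbered port; fall back to sequential order when
# shuf is not installed.
if command -v shuf >/dev/null 2>&1; then
    port_list=$(shuf -i "${base_port}-${max_port}")
else
    port_list=$(seq "$base_port" "$max_port")
fi

# Count the ports visited; both branches cover the full range.
count=0
for port in $port_list; do
    count=$((count + 1))
done
echo "visited $count ports"
```

Shuffling only changes the order of candidates, so the worst-case number of availability checks is unchanged; the benefit is that two jobs starting at the same time tend to probe different ports first.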
vec_inf-0.7.2.dist-info/METADATA → vec_inf-0.8.0.dist-info/METADATA CHANGED
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: vec-inf
-Version: 0.7.2
+Version: 0.8.0
 Summary: Efficient LLM inference on Slurm clusters using vLLM.
 Author-email: Marshall Wang <marshall.wang@vectorinstitute.ai>
 License-Expression: MIT
@@ -11,14 +11,14 @@ Requires-Dist: pydantic>=2.10.6
 Requires-Dist: pyyaml>=6.0.2
 Requires-Dist: requests>=2.31.0
 Requires-Dist: rich>=13.7.0
-Provides-Extra: dev
-Requires-Dist: cupy-cuda12x==12.1.0; extra == 'dev'
-Requires-Dist: flashinfer-python>=0.4.0; extra == 'dev'
-Requires-Dist: ray[default]>=2.50.0; extra == 'dev'
-Requires-Dist: sglang>=0.5.0; extra == 'dev'
-Requires-Dist: torch>=2.7.0; extra == 'dev'
-Requires-Dist: vllm>=0.10.0; extra == 'dev'
-Requires-Dist: xgrammar>=0.1.11; extra == 'dev'
+Provides-Extra: sglang
+Requires-Dist: orjson>=3.11.0; extra == 'sglang'
+Requires-Dist: sgl-kernel>=0.3.0; extra == 'sglang'
+Requires-Dist: sglang>=0.5.5; extra == 'sglang'
+Requires-Dist: torchao>=0.9.0; extra == 'sglang'
+Provides-Extra: vllm
+Requires-Dist: ray[default]>=2.51.0; extra == 'vllm'
+Requires-Dist: vllm>=0.11.2; extra == 'vllm'
 Description-Content-Type: text/markdown
 
 # Vector Inference: Easy inference on Slurm clusters
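The metadata change above replaces the single `dev` extra with per-backend `vllm` and `sglang` extras, which implies backend-specific installs. A hedged sketch of what that looks like (the extras names come from the METADATA above; picking one backend per environment is an assumption, not a documented requirement):

```shell
# Quote the requirement so the shell does not treat the square
# brackets as a glob pattern.
pip install 'vec-inf[vllm]'    # pulls in vllm>=0.11.2 and ray[default]>=2.51.0
pip install 'vec-inf[sglang]'  # pulls in sglang>=0.5.5, sgl-kernel, torchao, orjson
```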
@@ -30,10 +30,11 @@ Description-Content-Type: text/markdown
 [![code checks](https://github.com/VectorInstitute/vector-inference/actions/workflows/code_checks.yml/badge.svg)](https://github.com/VectorInstitute/vector-inference/actions/workflows/code_checks.yml)
 [![docs](https://github.com/VectorInstitute/vector-inference/actions/workflows/docs.yml/badge.svg)](https://github.com/VectorInstitute/vector-inference/actions/workflows/docs.yml)
 [![codecov](https://codecov.io/github/VectorInstitute/vector-inference/branch/main/graph/badge.svg?token=NI88QSIGAC)](https://app.codecov.io/github/VectorInstitute/vector-inference/tree/main)
-[![vLLM](https://img.shields.io/badge/vLLM-0.10.1.1-blue)](https://docs.vllm.ai/en/v0.10.1.1/)
+[![vLLM](https://img.shields.io/badge/vLLM-0.12.0-blue)](https://docs.vllm.ai/en/v0.12.0/)
+[![SGLang](https://img.shields.io/badge/SGLang-0.5.5.post3-blue)](https://docs.sglang.io/index.html)
 ![GitHub License](https://img.shields.io/github/license/VectorInstitute/vector-inference)
 
-This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). **This package runs natively on the Vector Institute cluster environments**. To adapt to other environments, follow the instructions in [Installation](#installation).
+This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using open-source inference engines ([vLLM](https://docs.vllm.ai/en/v0.12.0/), [SGLang](https://docs.sglang.io/index.html)). **This package runs natively on the Vector Institute cluster environments**. To adapt to other environments, follow the instructions in [Installation](#installation).
 
 **NOTE**: Supported models on Killarney are tracked [here](./MODEL_TRACKING.md)
 
@@ -43,12 +44,12 @@ If you are using the Vector cluster environment, and you don't need any customiz
 ```bash
 pip install vec-inf
 ```
-Otherwise, we recommend using the provided [`Dockerfile`](Dockerfile) to set up your own environment with the package. The latest image has `vLLM` version `0.10.1.1`.
+Otherwise, we recommend using the provided [`vllm.Dockerfile`](vllm.Dockerfile) and [`sglang.Dockerfile`](sglang.Dockerfile) to set up your own environment with the package. The built images are available through [Docker Hub](https://hub.docker.com/orgs/vectorinstitute/repositories).
 
 If you'd like to use `vec-inf` on your own Slurm cluster, you will need to update the configuration files. There are 3 ways to do it:
 * Clone the repository and update the `environment.yaml` and the `models.yaml` file in [`vec_inf/config`](vec_inf/config/), then install from source by running `pip install .`.
 * The package will look for cached configuration files in your environment before using the default configuration. The default cached configuration directory path points to `/model-weights/vec-inf-shared`; you would need to create an `environment.yaml` and a `models.yaml` following the format of these files in [`vec_inf/config`](vec_inf/config/).
-* The package would also look for an environment variable `VEC_INF_CONFIG_DIR`. You can put your `environment.yaml` and `models.yaml` in a directory of your choice and set the environment variable `VEC_INF_CONFIG_DIR` to point to that location.
+* [OPTIONAL] The package also looks for an environment variable `VEC_INF_CONFIG_DIR`. You can put your `environment.yaml` and `models.yaml` in a directory of your choice and set the environment variable `VEC_INF_CONFIG_DIR` to point to that location.
 
@@ -65,18 +66,18 @@ vec-inf launch Meta-Llama-3.1-8B-Instruct
 ```
 You should see an output like the following:
 
-<img width="720" alt="launch_image" src="https://github.com/user-attachments/assets/c1e0c60c-cf7a-49ed-a426-fdb38ebf88ee" />
+<img width="720" alt="launch_image" src="./docs/assets/launch.png" />
 
 **NOTE**: You can set the required fields in the environment configuration (`environment.yaml`); it's a mapping between required arguments and their corresponding environment variables. On the Vector **Killarney** Cluster environment, the required fields are:
 * `--account`, `-A`: The Slurm account; this argument can be given a default by setting the environment variable `VEC_INF_ACCOUNT`.
 * `--work-dir`, `-D`: A working directory other than your home directory; this argument can be given a default by setting the environment variable `VEC_INF_WORK_DIR`.
 
-Models that are already supported by `vec-inf` would be launched using the cached configuration (set in [slurm_vars.py](vec_inf/client/slurm_vars.py)) or [default configuration](vec_inf/config/models.yaml). You can override these values by providing additional parameters. Use `vec-inf launch --help` to see the full list of parameters that can be overridden. You can also launch your own custom model as long as the model architecture is [supported by vLLM](https://docs.vllm.ai/en/stable/models/supported_models.html). For detailed instructions on how to customize your model launch, check out the [`launch` command section in the User Guide](https://vectorinstitute.github.io/vector-inference/latest/user_guide/#launch-command)
+Models that are already supported by `vec-inf` are launched using the cached configuration (set in [slurm_vars.py](vec_inf/client/slurm_vars.py)) or the [default configuration](vec_inf/config/models.yaml). You can override these values by providing additional parameters. Use `vec-inf launch --help` to see the full list of parameters that can be overridden. You can also launch your own custom model as long as the model architecture is supported by the underlying inference engine. For detailed instructions on how to customize your model launch, check out the [`launch` command section in the User Guide](https://vectorinstitute.github.io/vector-inference/latest/user_guide/#launch-command)
 
 #### Other commands
 
 * `batch-launch`: Launch multiple model inference servers at once; currently ONLY single-node models are supported.
-* `status`: Check the model status by providing its Slurm job ID.
+* `status`: Check the status of all `vec-inf` jobs, or a specific job by providing its job ID.
 * `metrics`: Streams performance metrics to the console.
 * `shutdown`: Shutdown a model by providing its Slurm job ID.
 * `list`: List all available model names, or view the default/cached configuration of a specific model.
vec_inf-0.8.0.dist-info/RECORD ADDED
@@ -0,0 +1,27 @@
+vec_inf/README.md,sha256=GpKnty9u1b06cPT2Ce_5v0LBucmXOQt6Nl4OJKvjf68,1410
+vec_inf/__init__.py,sha256=bHwSIz9lebYuxIemni-lP0h3gwJHVbJnwExQKGJWw_Q,23
+vec_inf/find_port.sh,sha256=HHx1kg-TIoPZu0u55S4T5jl8MDV4_mnqh4Y7r_quyWw,1358
+vec_inf/cli/__init__.py,sha256=5XIvGQCOnaGl73XMkwetjC-Ul3xuXGrWDXdYJ3aUzvU,27
+vec_inf/cli/_cli.py,sha256=2IOUZXGH9CGU_1tICjPVERtE6c-kqb1DbYOEdtuy_l0,17524
+vec_inf/cli/_helper.py,sha256=h2baXAOugmQK4ZPWxDb4pEcnb17jEZN6GsmqMjarikQ,19700
+vec_inf/cli/_utils.py,sha256=23vSbmvNOWY1-W1aOAwYqNDkDDmx-5UVlCiXAtxUZ8A,1057
+vec_inf/cli/_vars.py,sha256=ujrBtczo6qgsIyJb9greaInFo1gGvxZ6pga9CaBosPg,1147
+vec_inf/client/__init__.py,sha256=OLlUJ4kL1R-Kh-nXNbvKlAZ3mtHcnozHprVufkVCNWk,739
+vec_inf/client/_client_vars.py,sha256=8TleM3nFsmwqOLX0V0y_vvdyz0SyTyd2m_aPt1SjR1Q,3396
+vec_inf/client/_exceptions.py,sha256=94Nx_5k1SriJNXzbdnwyXFZolyMutydU08Gsikawzzo,749
+vec_inf/client/_helper.py,sha256=veii4dKGpBbPpz_X01rHKi2BtkdBjw8RmXpMBBajsyM,41473
+vec_inf/client/_slurm_script_generator.py,sha256=QT36zbdoiADTaUgfe0aYPu0gbN8ctpv_4ElKlBt-Rf0,16217
+vec_inf/client/_slurm_templates.py,sha256=XxIPREQKyF3gT3qGTDFsxx-gduiiVX7rPm-vuAVgjiA,11857
+vec_inf/client/_slurm_vars.py,sha256=nKVYIUPcCKVLBVXzzMqt6b3BGaGIAX_gIyG28wqb_40,3270
+vec_inf/client/_utils.py,sha256=NU_MZeei_RrHXdVNuymEkd-LWtv4qz3yyfn18JBddoM,14513
+vec_inf/client/api.py,sha256=-vazAWvZp0vsn4jB6R-WdUo5eZ5bR-XJqU6r6qOL16A,13596
+vec_inf/client/config.py,sha256=dB1getOXYQk4U4ge-x5qglHJlYZ4PHEaKh7rWdwA1Jg,6206
+vec_inf/client/models.py,sha256=FFWo3XAIlu754FILnBWxCGtLYqLga1vhiCm8i8uZ0pc,7868
+vec_inf/config/README.md,sha256=LrClRwcA-fR8XgmD9TyunuIzrSme4IAwwXmIf9O00zg,532
+vec_inf/config/environment.yaml,sha256=FspYtoQi5fACmb2ludx5WkDNlex2PtFmoHWMZiDWujI,1092
+vec_inf/config/models.yaml,sha256=qQP1GTHnKeGxEOlWqAWvpaBddM6jbR0YOu4X0CENpHE,21069
+vec_inf-0.8.0.dist-info/METADATA,sha256=As1VQZ4ULgxXI1mRGRwHYYs7_qxJriZAqO6n2ZAdYvg,10319
+vec_inf-0.8.0.dist-info/WHEEL,sha256=WLgqFyCfm_KASv4WHyYy0P3pM_m7J5L9k2skdKLirC8,87
+vec_inf-0.8.0.dist-info/entry_points.txt,sha256=uNRXjCuJSR2nveEqD3IeMznI9oVI9YLZh5a24cZg6B0,49
+vec_inf-0.8.0.dist-info/licenses/LICENSE,sha256=mq8zeqpvVSF1EsxmydeXcokt8XnEIfSofYn66S2-cJI,1073
+vec_inf-0.8.0.dist-info/RECORD,,
vec_inf-0.7.2.dist-info/WHEEL → vec_inf-0.8.0.dist-info/WHEEL CHANGED
@@ -1,4 +1,4 @@
 Wheel-Version: 1.0
-Generator: hatchling 1.27.0
+Generator: hatchling 1.28.0
 Root-Is-Purelib: true
 Tag: py3-none-any
vec_inf-0.7.2.dist-info/RECORD DELETED
@@ -1,27 +0,0 @@
-vec_inf/README.md,sha256=WyvjbSs5Eh5fp8u66bgOaO3FQKP2U7m_HbLgqTHs_ng,1322
-vec_inf/__init__.py,sha256=bHwSIz9lebYuxIemni-lP0h3gwJHVbJnwExQKGJWw_Q,23
-vec_inf/find_port.sh,sha256=bGQ6LYSFVSsfDIGatrSg5YvddbZfaPL0R-Bjo4KYD6I,1088
-vec_inf/cli/__init__.py,sha256=5XIvGQCOnaGl73XMkwetjC-Ul3xuXGrWDXdYJ3aUzvU,27
-vec_inf/cli/_cli.py,sha256=0YfxtPT_Nq5gvIol9eWmw5yW9AT1ghf_E49R9pD7UG4,16213
-vec_inf/cli/_helper.py,sha256=0_onclvxxpDTp33ODYc19RbZ2aIhXuMTC9v19q8ZhIo,17473
-vec_inf/cli/_utils.py,sha256=23vSbmvNOWY1-W1aOAwYqNDkDDmx-5UVlCiXAtxUZ8A,1057
-vec_inf/cli/_vars.py,sha256=V6DrJs_BuUa4yNcbBSSnMwpcyXwEBsizy3D0ubIg2fA,777
-vec_inf/client/__init__.py,sha256=OLlUJ4kL1R-Kh-nXNbvKlAZ3mtHcnozHprVufkVCNWk,739
-vec_inf/client/_client_vars.py,sha256=1D-bX9dS0-pFImLvgWt2hUnwJiz-VaxuLb2HIfPML8I,2408
-vec_inf/client/_exceptions.py,sha256=94Nx_5k1SriJNXzbdnwyXFZolyMutydU08Gsikawzzo,749
-vec_inf/client/_helper.py,sha256=hb6m5TLwcGE0grCu5-UCUkWbByV-G5h8gA87Yzct6rk,37170
-vec_inf/client/_slurm_script_generator.py,sha256=L6tqn71kNJ2I0xYipFh_ZxIAG8znpXhTpUxTU8LJIa4,13988
-vec_inf/client/_slurm_templates.py,sha256=GxVNClkgggoJN2pT1AjK7CQCAErfKRMIs97Vlhxs9u8,9349
-vec_inf/client/_slurm_vars.py,sha256=sgP__XhpE1K7pvOzVFmotUXmINYPcOuFP-zGaePT5Iw,2910
-vec_inf/client/_utils.py,sha256=_ZBmic0XvJ4vpdIuXDi6KO5iL2rbhIpFQT01EWGItN4,14296
-vec_inf/client/api.py,sha256=lkVWCme-HmMJMqp8JbtjkBVL_MSPsCC_IBL9FBw3Um8,12011
-vec_inf/client/config.py,sha256=VU4h2iqL0rxYAqGw2HBF_l6QvvSDJy5M79IgX5G2PW4,5830
-vec_inf/client/models.py,sha256=jGNPOj1uPPBV7xdGy3HFv2ZwpJOGCsU8qm7pE2Rnnes,7498
-vec_inf/config/README.md,sha256=TvZOqZyTUaAFr71hC7GVgg6QUw80AXREyq8wS4D-F30,528
-vec_inf/config/environment.yaml,sha256=oEDp85hUERJO9NNn4wYhcgunnmkln50GNHDzG_3isMw,678
-vec_inf/config/models.yaml,sha256=PSDR29zI8xld32Vm6dhgCIRHPEkBhwQx7-d_uFlEAM8,24764
-vec_inf-0.7.2.dist-info/METADATA,sha256=ljs9hao8q4igLERrjGL5u1vZ_n7DMrr8XnBHzybPE2Y,10099
-vec_inf-0.7.2.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
-vec_inf-0.7.2.dist-info/entry_points.txt,sha256=uNRXjCuJSR2nveEqD3IeMznI9oVI9YLZh5a24cZg6B0,49
-vec_inf-0.7.2.dist-info/licenses/LICENSE,sha256=mq8zeqpvVSF1EsxmydeXcokt8XnEIfSofYn66S2-cJI,1073
-vec_inf-0.7.2.dist-info/RECORD,,