libcuopt-cu13 25.10.0__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

@@ -0,0 +1,169 @@

Metadata-Version: 2.1
Name: libcuopt-cu13
Version: 25.10.0
Summary: cuOpt - GPU Optimizer (C++)
Author: NVIDIA Corporation
License: Apache-2.0
Classifier: Intended Audience :: Developers
Classifier: Topic :: Database
Classifier: Topic :: Scientific/Engineering
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: C++
Classifier: Environment :: GPU :: NVIDIA CUDA
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Project-URL: Homepage, https://docs.nvidia.com/cuopt/introduction.html
Project-URL: Source, https://github.com/nvidia/cuopt
Requires-Python: >=3.10
Requires-Dist: cuda-toolkit[cublas,cudart,curand,cusolver,cusparse,nvtx]==13.*
Requires-Dist: cuopt-mps-parser==25.10.*
Requires-Dist: librmm-cu13==25.10.*
Requires-Dist: nvidia-cudss-cu13
Requires-Dist: rapids-logger==0.1.*
Description-Content-Type: text/markdown

# cuOpt - GPU accelerated Optimization Engine

[Build Status](https://github.com/NVIDIA/cuopt/actions/workflows/build.yaml)

NVIDIA® cuOpt™ is a GPU-accelerated optimization engine that excels in mixed integer linear programming (MILP), linear programming (LP), and vehicle routing problems (VRP). It enables near real-time solutions for large-scale challenges with millions of variables and constraints, offering easy integration into existing solvers and seamless deployment across hybrid and multi-cloud environments.

The core engine is written in C++ and wrapped with a C API, a Python API, and a Server API.

For the latest stable version, ensure you are on the `main` branch.

## Supported APIs

cuOpt supports the following APIs:

- C API support
  - Linear Programming (LP)
  - Mixed Integer Linear Programming (MILP)
- C++ API support
  - cuOpt is written in C++ and includes a native C++ API. However, we do not provide documentation for the C++ API at this time, and we anticipate that it will change significantly in the future. Use it at your own risk.
- Python support
  - Routing (TSP, VRP, and PDP)
  - Linear Programming (LP) and Mixed Integer Linear Programming (MILP)
  - cuOpt includes a Python API that is used as the backend of the cuOpt server. However, we do not provide documentation for the Python API at this time, and we anticipate that it will change significantly in the future; we suggest using the cuOpt server to access cuOpt via Python. Use it at your own risk.
- Server support
  - Linear Programming (LP)
  - Mixed Integer Linear Programming (MILP)
  - Routing (TSP, VRP, and PDP)

This repo is also hosted as a [COIN-OR](http://github.com/coin-or/cuopt/) project.

## Installation

### CUDA/GPU requirements

* CUDA 12.0+ or CUDA 13.0+
* NVIDIA driver >= 525.60.13 (Linux) or >= 527.41 (Windows)
* Volta architecture or better (Compute Capability >= 7.0)
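
As a quick sanity check, these minimums can be compared against the strings reported by `nvidia-smi --query-gpu=driver_version,compute_cap --format=csv,noheader`. A minimal sketch follows; the helper names are illustrative and not part of cuOpt:

```python
# Illustrative check of the cuOpt minimums listed above (Linux driver numbers).
MIN_DRIVER_LINUX = (525, 60, 13)
MIN_COMPUTE_CAP = (7, 0)

def parse_version(text: str) -> tuple[int, ...]:
    """Turn a dotted version string like '535.104.05' into a tuple of ints."""
    return tuple(int(part) for part in text.strip().split("."))

def meets_requirements(driver: str, compute_cap: str) -> bool:
    """True if the driver version and compute capability meet cuOpt's minimums."""
    return (parse_version(driver) >= MIN_DRIVER_LINUX
            and parse_version(compute_cap) >= MIN_COMPUTE_CAP)

print(meets_requirements("535.104.05", "8.0"))  # Ampere GPU, recent driver
print(meets_requirements("470.82.01", "7.5"))   # compute capability is fine, driver too old
```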

### Python requirements

* Python >= 3.10, <= 3.13

### OS requirements

* Linux is supported natively; Windows is supported via WSL2
* x86_64 (64-bit)
* aarch64 (64-bit)

Note: WSL2 is tested for running cuOpt, but not for building it.

More details on system requirements can be found [here](https://docs.nvidia.com/cuopt/user-guide/latest/system-requirements.html).

### Pip

Pip wheels are easy to install and configure. Users whose existing workflows are built on pip can use pip to install cuOpt.

cuOpt can be installed via `pip` from the NVIDIA Python Package Index. Be sure to select the appropriate cuOpt package depending on the major version of CUDA available in your environment.

For CUDA 12.x:

```bash
pip install \
    --extra-index-url=https://pypi.nvidia.com \
    nvidia-cuda-runtime-cu12==12.9.* \
    cuopt-server-cu12==25.10.* cuopt-sh-client==25.10.*
```

Development wheels are available as nightlies; add `--extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/` to install the latest nightly packages:

```bash
pip install --pre \
    --extra-index-url=https://pypi.nvidia.com \
    --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/ \
    cuopt-server-cu12==25.10.* cuopt-sh-client==25.10.*
```

For CUDA 13.x:

```bash
pip install \
    --extra-index-url=https://pypi.nvidia.com \
    cuopt-server-cu13==25.10.* cuopt-sh-client==25.10.*
```

Development wheels are again available as nightlies from the same nightly index:

```bash
pip install --pre \
    --extra-index-url=https://pypi.nvidia.com \
    --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/ \
    cuopt-server-cu13==25.10.* cuopt-sh-client==25.10.*
```
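
To decide between the `-cu12` and `-cu13` package suffixes, the CUDA major version can be pulled from a version string such as the `nvcc --version` banner. A small sketch, with an illustrative helper name:

```python
import re

def cuopt_suffix(cuda_version_text: str) -> str:
    """Extract the CUDA major version (12 or 13) and map it to a package suffix."""
    match = re.search(r"\b(1[23])\.\d+", cuda_version_text)
    if match is None:
        raise ValueError("no supported CUDA 12.x/13.x version found")
    return f"cu{match.group(1)}"

print(cuopt_suffix("Cuda compilation tools, release 12.9, V12.9.86"))  # cu12
print(cuopt_suffix("13.0"))                                            # cu13
```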

### Conda

cuOpt can be installed with conda (via [miniforge](https://github.com/conda-forge/miniforge)). All other dependencies are installed automatically when `cuopt-server` and `cuopt-sh-client` are installed.

```bash
conda install -c rapidsai -c conda-forge -c nvidia cuopt-server=25.10.* cuopt-sh-client=25.10.*
```

We also provide [nightly conda packages](https://anaconda.org/rapidsai-nightly) built from the HEAD of our latest development branch. Just replace `-c rapidsai` with `-c rapidsai-nightly`.

### Container

Users can pull the cuOpt container from the NVIDIA container registry.

```bash
# For CUDA 12.x
docker pull nvidia/cuopt:latest-cuda12.9-py3.13

# For CUDA 13.x
docker pull nvidia/cuopt:latest-cuda13.0-py3.13
```

Note: The `latest` tag is the latest stable release of cuOpt. To use a specific version, use the `<version>-cuda12.9-py3.13` or `<version>-cuda13.0-py3.13` tag. For example, cuOpt 25.10.0 is available as `25.10.0-cuda12.9-py3.13` or `25.10.0-cuda13.0-py3.13`. Please refer to the [cuOpt Docker Hub page](https://hub.docker.com/r/nvidia/cuopt/tags) for the list of available tags.
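
The tag scheme above is regular enough to compose programmatically; a trivial illustrative helper (not part of any cuOpt tooling):

```python
def cuopt_tag(version: str, cuda: str, python: str = "3.13") -> str:
    """Compose an image tag like '25.10.0-cuda12.9-py3.13'; 'latest' is also a valid version."""
    return f"{version}-cuda{cuda}-py{python}"

print(cuopt_tag("25.10.0", "12.9"))  # 25.10.0-cuda12.9-py3.13
print(cuopt_tag("latest", "13.0"))   # latest-cuda13.0-py3.13
```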

More information about the cuOpt container can be found [here](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-server/quick-start.html#container-from-docker-hub).

The cuOpt container is a quick way to get started, whether for testing and research or for plugging cuOpt into a workflow as a service. Users deploying it as a service are responsible for building security layers around the service to safeguard it from untrusted users.

## Build from Source and Test

Please see our [guide for building cuOpt from source](CONTRIBUTING.md#setting-up-your-build-environment). This is helpful for users who want to add features, fix bugs, or customize cuOpt for use cases that require changes to the cuOpt source code.

## Contributing Guide

Review the [CONTRIBUTING.md](CONTRIBUTING.md) file for information on how to contribute code and issues to the project.

## Resources

- [libcuopt (C) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-c/index.html)
- [cuopt (Python) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-python/index.html)
- [cuopt (Server) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-server/index.html)
- [Examples and Notebooks](https://github.com/NVIDIA/cuopt-examples)
- [Test cuOpt with NVIDIA Launchable](https://brev.nvidia.com/launchable/deploy?launchableID=env-2qIG6yjGKDtdMSjXHcuZX12mDNJ): example notebooks are pulled and hosted on [NVIDIA Launchable](https://docs.nvidia.com/brev/latest/).
- [Test cuOpt on Google Colab](https://colab.research.google.com/github/nvidia/cuopt-examples/): example notebooks can be opened in Google Colab. Please note that you need to choose `GPU` as the `Runtime` in order to run the notebooks.
- [cuOpt Examples and Tutorial Videos](https://docs.nvidia.com/cuopt/user-guide/latest/resources.html#cuopt-examples-and-tutorials-videos)

@@ -0,0 +1,10 @@

# SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0

[build-system]
requires = ["wheel-stub"]
build-backend = "wheel_stub.buildapi"

[tool.wheel_stub]
index_url = "https://pypi.nvidia.com/"
include_cuda_debuginfo = true