moospread 0.1.0__tar.gz → 0.1.1__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (72)
  1. moospread-0.1.1/PKG-INFO +141 -0
  2. moospread-0.1.1/README.md +146 -0
  3. moospread-0.1.1/README_PYPI.md +109 -0
  4. moospread-0.1.1/moospread.egg-info/PKG-INFO +141 -0
  5. {moospread-0.1.0 → moospread-0.1.1}/moospread.egg-info/SOURCES.txt +1 -0
  6. {moospread-0.1.0 → moospread-0.1.1}/pyproject.toml +2 -2
  7. moospread-0.1.0/PKG-INFO +0 -75
  8. moospread-0.1.0/README.md +0 -43
  9. moospread-0.1.0/moospread.egg-info/PKG-INFO +0 -75
  10. {moospread-0.1.0 → moospread-0.1.1}/LICENSE +0 -0
  11. {moospread-0.1.0 → moospread-0.1.1}/moospread/__init__.py +0 -0
  12. {moospread-0.1.0 → moospread-0.1.1}/moospread/core.py +0 -0
  13. {moospread-0.1.0 → moospread-0.1.1}/moospread/problem.py +0 -0
  14. {moospread-0.1.0 → moospread-0.1.1}/moospread/tasks/__init__.py +0 -0
  15. {moospread-0.1.0 → moospread-0.1.1}/moospread/tasks/dtlz_torch.py +0 -0
  16. {moospread-0.1.0 → moospread-0.1.1}/moospread/tasks/mw_torch.py +0 -0
  17. {moospread-0.1.0 → moospread-0.1.1}/moospread/tasks/re_torch.py +0 -0
  18. {moospread-0.1.0 → moospread-0.1.1}/moospread/tasks/zdt_torch.py +0 -0
  19. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/__init__.py +0 -0
  20. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/constraint_utils/__init__.py +0 -0
  21. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/constraint_utils/gradient.py +0 -0
  22. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/constraint_utils/mgda_core.py +0 -0
  23. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/constraint_utils/pmgda_solver.py +0 -0
  24. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/constraint_utils/prefs.py +0 -0
  25. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/ditmoo.py +0 -0
  26. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/lhs.py +0 -0
  27. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/misc.py +0 -0
  28. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/__init__.py +0 -0
  29. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/evolution/__init__.py +0 -0
  30. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/evolution/dom.py +0 -0
  31. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/evolution/norm.py +0 -0
  32. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/evolution/utils.py +0 -0
  33. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/learning/__init__.py +0 -0
  34. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/learning/model.py +0 -0
  35. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/learning/model_init.py +0 -0
  36. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/learning/model_update.py +0 -0
  37. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/learning/prediction.py +0 -0
  38. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/learning/utils.py +0 -0
  39. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/lhs_for_mobo.py +0 -0
  40. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/__init__.py +0 -0
  41. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/acquisition.py +0 -0
  42. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/algorithms.py +0 -0
  43. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/factory.py +0 -0
  44. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/mobo.py +0 -0
  45. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/selection.py +0 -0
  46. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/solver/__init__.py +0 -0
  47. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/solver/moead.py +0 -0
  48. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/solver/nsga2.py +0 -0
  49. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/solver/parego/__init__.py +0 -0
  50. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/solver/parego/parego.py +0 -0
  51. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/solver/parego/utils.py +0 -0
  52. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/solver/pareto_discovery/__init__.py +0 -0
  53. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/solver/pareto_discovery/buffer.py +0 -0
  54. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/solver/pareto_discovery/pareto_discovery.py +0 -0
  55. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/solver/pareto_discovery/utils.py +0 -0
  56. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/solver/solver.py +0 -0
  57. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/surrogate_model/__init__.py +0 -0
  58. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/surrogate_model/base.py +0 -0
  59. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/surrogate_model/gaussian_process.py +0 -0
  60. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/surrogate_model/thompson_sampling.py +0 -0
  61. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/surrogate_problem.py +0 -0
  62. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/transformation.py +0 -0
  63. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/mobo/utils.py +0 -0
  64. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/mobo_utils/spread_mobo_utils.py +0 -0
  65. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/offline_utils/__init__.py +0 -0
  66. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/offline_utils/handle_task.py +0 -0
  67. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/offline_utils/proxies.py +0 -0
  68. {moospread-0.1.0 → moospread-0.1.1}/moospread/utils/spread_utils.py +0 -0
  69. {moospread-0.1.0 → moospread-0.1.1}/moospread.egg-info/dependency_links.txt +0 -0
  70. {moospread-0.1.0 → moospread-0.1.1}/moospread.egg-info/requires.txt +0 -0
  71. {moospread-0.1.0 → moospread-0.1.1}/moospread.egg-info/top_level.txt +0 -0
  72. {moospread-0.1.0 → moospread-0.1.1}/setup.cfg +0 -0
@@ -0,0 +1,141 @@
1
+ Metadata-Version: 2.4
2
+ Name: moospread
3
+ Version: 0.1.1
4
+ Summary: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion
5
+ Author-email: Sedjro Salomon Hotegni <salomon.hotegni@aims.ac.rw>
6
+ Maintainer-email: Sedjro Salomon Hotegni <salomon.hotegni@tu-dortmund.de>
7
+ License: MIT License
8
+
9
+ Copyright (c) 2026 Sedjro Salomon Hotegni
10
+
11
+ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
12
+
13
+ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
16
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
17
+
18
+ Project-URL: bugs, https://github.com/safe-autonomous-systems/moo-spread/issues
19
+ Project-URL: changelog, https://github.com/safe-autonomous-systems/moo-spread/blob/main/changelog.md
20
+ Project-URL: Homepage, https://github.com/safe-autonomous-systems/moo-spread
21
+ Requires-Python: >=3.8
22
+ Description-Content-Type: text/markdown
23
+ License-File: LICENSE
24
+ Requires-Dist: numpy
25
+ Requires-Dist: torch
26
+ Provides-Extra: dev
27
+ Requires-Dist: coverage; extra == "dev"
28
+ Requires-Dist: mypy; extra == "dev"
29
+ Requires-Dist: pytest; extra == "dev"
30
+ Requires-Dist: ruff; extra == "dev"
31
+ Dynamic: license-file
32
+
33
+ <p align="center">
34
+ <img src="https://github.com/safe-autonomous-systems/moo-spread/raw/main/images/logo_well_spread.png"
35
+ alt="moospread logo" width="300">
36
+ </p>
37
+ <!--
38
+ <p align="center">
39
+ <a href="https://pypi.org/project/moospread/"><img src="https://img.shields.io/pypi/v/moospread.svg" alt="PyPI version"></a>
40
+ </p>
41
+ -->
42
+
43
+ # SPREAD: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion
44
+
45
+ > SPREAD is a novel sampling-based approach for multi-objective optimization that leverages diffusion models to efficiently refine and generate well-spread Pareto front approximations. It combines the expressiveness of diffusion models with multi-objective optimization principles to achieve both high convergence to the Pareto front and excellent diversity across the objective space. SPREAD demonstrates competitive performance against state-of-the-art methods while providing a flexible framework for different optimization contexts.
46
+
47
+ ## 🚀 Getting Started
48
+
49
+ ### Installation
50
+
51
+ ```bash
52
+ conda create -n moospread python=3.11
53
+ conda activate moospread
54
+ pip install moospread
55
+ # To install CUDA‐enabled PyTorch, run (or visit: https://pytorch.org/get-started/locally/):
56
+ pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
57
+ ```
58
+ Or, to install the latest code from GitHub:
59
+ ```bash
60
+ conda create -n moospread python=3.11
61
+ conda activate moospread
62
+ git clone https://github.com/safe-autonomous-systems/moo-spread.git
63
+ cd moo-spread
64
+ pip install -e .
65
+ # To install CUDA‐enabled PyTorch, run (or visit: https://pytorch.org/get-started/locally/):
66
+ pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
67
+ ```
68
+ ### Basic usage
69
+ This example shows how to solve a standard multi-objective optimization benchmark (ZDT2) using the **SPREAD** solver.
70
+
71
+ ```python
72
+ import numpy as np
73
+ import torch
74
+
75
+ # Import the SPREAD solver
76
+ from moospread import SPREAD
77
+
78
+ # Import a test problem
79
+ from moospread.tasks import ZDT2
80
+
81
+ # Define the problem
82
+ n_var = 30
83
+ problem = ZDT2(n_var=n_var)
84
+
85
+ # Initialize the SPREAD solver
86
+ solver = SPREAD(
87
+ problem,
88
+ data_size=10000,
89
+ timesteps=5000,
90
+ num_epochs=1000,
91
+ train_tol=100,
92
+ num_blocks=3,
93
+ validation_split=0.1,
94
+ mode="online",
95
+ seed=2026,
96
+ verbose=True
97
+ )
98
+
99
+ # Solve the problem
100
+ results = solver.solve(
101
+ num_points_sample=200,
102
+ strict_guidance=False,
103
+ rho_scale_gamma=0.9,
104
+ nu_t=10.0,
105
+ eta_init=0.9,
106
+ num_inner_steps=10,
107
+ lr_inner=0.9,
108
+ free_initial_h=True,
109
+ use_sigma_rep=False,
110
+ kernel_sigma_rep=0.01,
111
+ iterative_plot=True,
112
+ plot_period=10,
113
+ max_backtracks=25,
114
+ save_results=True,
115
+ samples_store_path="./samples_dir/",
116
+ images_store_path="./images_dir/"
117
+ )
118
+ ```
119
+
120
+ This will train a diffusion-based multi-objective solver, approximate the Pareto front of the ZDT2 problem, and store generated samples and plots in the specified directories.
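+
+ The snippet below is a minimal post-processing sketch, not part of the documented moospread API: the on-disk format of the saved samples is an assumption, so the file extensions and loader calls may need to be adapted to whatever the solver actually writes to `samples_store_path`.
+ ```python
+ import os
+ import numpy as np
+ import torch
+
+ samples_dir = "./samples_dir/"  # same path passed as samples_store_path above
+
+ # Hypothetical loader: adjust to the actual file format written by the solver.
+ for fname in sorted(os.listdir(samples_dir)):
+     path = os.path.join(samples_dir, fname)
+     if fname.endswith(".npy"):
+         samples = np.load(path)            # assumed NumPy format
+     elif fname.endswith((".pt", ".pth")):
+         samples = torch.load(path)         # assumed PyTorch format
+     else:
+         continue
+     print(fname, getattr(samples, "shape", type(samples)))
+ ```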
121
+
122
+ ---
123
+
124
+ <!--
125
+ ### 📚 Next steps
126
+
127
+ For more advanced examples (offline mode, Bayesian mode, custom problems), see the full [documentation](https://moospread.readthedocs.io/en/latest/).
128
+ -->
129
+
130
+ ## Citation
131
+ If you find `moospread` useful in your research, please consider citing:
132
+ ```
133
+ @inproceedings{
134
+ hotegni2026spread,
135
+ title={{SPREAD}: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion},
136
+ author={Hotegni, Sedjro Salomon and Peitz, Sebastian},
137
+ booktitle={The Fourteenth International Conference on Learning Representations},
138
+ year={2026},
139
+ url={https://openreview.net/forum?id=4731mIqv89}
140
+ }
141
+ ```
@@ -0,0 +1,146 @@
1
+ <p align="center">
2
+ <img src="/images/logo_well_spread.png" style="width: 30%; height: auto;">
3
+ </p>
4
+ <a href="https://pypi.org/project/moospread/"><img src="https://img.shields.io/pypi/v/moospread.svg" alt="PyPI version"></a>
5
+
6
+ # [ICLR 2026] SPREAD: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion
7
+
8
+ <!--
9
+ <p align="center">
10
+ <a href="https://pypi.org/project/moospread/"><img src="https://img.shields.io/pypi/v/moospread.svg" alt="PyPI version"></a>
11
+ </p>
12
+ -->
13
+
14
+ > SPREAD is a novel sampling-based approach for multi-objective optimization that leverages diffusion models to efficiently refine and generate well-spread Pareto front approximations. It combines the expressiveness of diffusion models with multi-objective optimization principles to achieve both high convergence to the Pareto front and excellent diversity across the objective space. SPREAD demonstrates competitive performance against state-of-the-art methods while providing a flexible framework for different optimization contexts.
15
+
16
+ ## 🚀 Getting Started
17
+
18
+ ### Installation
19
+
20
+ ```bash
21
+ conda create -n moospread python=3.11
22
+ conda activate moospread
23
+ pip install moospread
24
+ # To install CUDA‐enabled PyTorch, run (or visit: https://pytorch.org/get-started/locally/):
25
+ pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
26
+ ```
27
+ Or, to install the latest code from GitHub:
28
+ ```bash
29
+ conda create -n moospread python=3.11
30
+ conda activate moospread
31
+ git clone https://github.com/safe-autonomous-systems/moo-spread.git
32
+ cd moo-spread
33
+ pip install -e .
34
+ # To install CUDA‐enabled PyTorch, run (or visit: https://pytorch.org/get-started/locally/):
35
+ pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
36
+ ```
37
+ ### Basic usage
38
+ This example shows how to solve a standard multi-objective optimization benchmark (ZDT2) using the **SPREAD** solver.
39
+
40
+ ```python
41
+ import numpy as np
42
+ import torch
43
+
44
+ # Import the SPREAD solver
45
+ from moospread import SPREAD
46
+
47
+ # Import a test problem
48
+ from moospread.tasks import ZDT2
49
+
50
+ # Define the problem
51
+ n_var = 30
52
+ problem = ZDT2(n_var=n_var)
53
+
54
+ # Initialize the SPREAD solver
55
+ solver = SPREAD(
56
+ problem,
57
+ data_size=10000,
58
+ timesteps=5000,
59
+ num_epochs=1000,
60
+ train_tol=100,
61
+ num_blocks=3,
62
+ validation_split=0.1,
63
+ mode="online",
64
+ seed=2026,
65
+ verbose=True
66
+ )
67
+
68
+ # Solve the problem
69
+ results = solver.solve(
70
+ num_points_sample=200,
71
+ strict_guidance=False,
72
+ rho_scale_gamma=0.9,
73
+ nu_t=10.0,
74
+ eta_init=0.9,
75
+ num_inner_steps=10,
76
+ lr_inner=0.9,
77
+ free_initial_h=True,
78
+ use_sigma_rep=False,
79
+ kernel_sigma_rep=0.01,
80
+ iterative_plot=True,
81
+ plot_period=10,
82
+ max_backtracks=25,
83
+ save_results=True,
84
+ samples_store_path="./samples_dir/",
85
+ images_store_path="./images_dir/"
86
+ )
87
+ ```
88
+
89
+ This will train a diffusion-based multi-objective solver, approximate the Pareto front of the ZDT2 problem, and store generated samples and plots in the specified directories.
90
+
91
+ ---
92
+
93
+ <!--
94
+ ### 📚 Next steps
95
+
96
+ For more advanced examples (offline mode, Bayesian mode, custom problems), see the full [documentation](https://moospread.readthedocs.io/en/latest/).
97
+ -->
98
+
99
+ ## 🔬 Experiments
100
+
101
+ All experiment code is contained in the `/experiments` directory:
102
+
103
+ * **Online setting:** `/experiments/spread/`
104
+ * **Offline setting:** `/experiments/spread_offline/`
105
+ * **Bayesian setting:** `/experiments/spread_bayesian/`
106
+
107
+ The following Jupyter notebooks reproduce the plots shown in our paper:
108
+
109
+ * `/experiments/spread/notebook_online_spread.ipynb`
110
+ * `/experiments/spread_bayesian/notebook_bayesian_spread.ipynb`
111
+
112
+ ### Environment Setup
113
+
114
+ Each experiment setting comes with its own environment file located in the corresponding folder:
115
+
116
+ - Online setting: `experiments/spread/spread.yml`
117
+ - Offline setting: `experiments/spread_offline/spread_off.yml`
118
+ - Bayesian setting: `experiments/spread_bayesian/spread_bay.yml`
119
+
120
+ To create the environment for a given setting, run:
121
+ ```bash
122
+ conda env create -f experiments/<folder>/<env_name>.yml
123
+ conda activate <env_name>
124
+ ```
125
+ For example, to run the online experiments:
126
+ ```bash
127
+ conda env create -f experiments/spread/spread.yml
128
+ conda activate spread
129
+ ```
130
+ The offline experiments require installing **Off-MOO-Bench** from the authors’ public repository: https://github.com/lamda-bbo/offline-moo. The datasets should be downloaded into the folder: `experiments/spread_offline/offline_moo/data/`.
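+
+ As an optional, hypothetical pre-flight check (plain Python, path as in the note above), one can verify that the expected data folder exists and is populated before launching the offline runs:
+ ```python
+ from pathlib import Path
+
+ # Expected dataset location for the offline experiments (see note above).
+ data_dir = Path("experiments/spread_offline/offline_moo/data")
+ data_dir.mkdir(parents=True, exist_ok=True)
+
+ # The exact file names depend on the Off-MOO-Bench download and are not assumed here.
+ found = sorted(p.name for p in data_dir.iterdir())
+ print(f"{len(found)} dataset file(s) found in {data_dir}")
+ ```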
131
+
132
+ ## 📃 Citation
133
+ If you find `moospread` useful in your research, please consider citing:
134
+ ```
135
+ @inproceedings{
136
+ hotegni2026spread,
137
+ title={{SPREAD}: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion},
138
+ author={Hotegni, Sedjro Salomon and Peitz, Sebastian},
139
+ booktitle={The Fourteenth International Conference on Learning Representations},
140
+ year={2026},
141
+ url={https://openreview.net/forum?id=4731mIqv89}
142
+ }
143
+ ```
144
+
145
+
146
+
@@ -0,0 +1,109 @@
1
+ <p align="center">
2
+ <img src="https://github.com/safe-autonomous-systems/moo-spread/raw/main/images/logo_well_spread.png"
3
+ alt="moospread logo" width="300">
4
+ </p>
5
+ <!--
6
+ <p align="center">
7
+ <a href="https://pypi.org/project/moospread/"><img src="https://img.shields.io/pypi/v/moospread.svg" alt="PyPI version"></a>
8
+ </p>
9
+ -->
10
+
11
+ # SPREAD: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion
12
+
13
+ > SPREAD is a novel sampling-based approach for multi-objective optimization that leverages diffusion models to efficiently refine and generate well-spread Pareto front approximations. It combines the expressiveness of diffusion models with multi-objective optimization principles to achieve both high convergence to the Pareto front and excellent diversity across the objective space. SPREAD demonstrates competitive performance against state-of-the-art methods while providing a flexible framework for different optimization contexts.
14
+
15
+ ## 🚀 Getting Started
16
+
17
+ ### Installation
18
+
19
+ ```bash
20
+ conda create -n moospread python=3.11
21
+ conda activate moospread
22
+ pip install moospread
23
+ # To install CUDA‐enabled PyTorch, run (or visit: https://pytorch.org/get-started/locally/):
24
+ pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
25
+ ```
26
+ Or, to install the latest code from GitHub:
27
+ ```bash
28
+ conda create -n moospread python=3.11
29
+ conda activate moospread
30
+ git clone https://github.com/safe-autonomous-systems/moo-spread.git
31
+ cd moo-spread
32
+ pip install -e .
33
+ # To install CUDA‐enabled PyTorch, run (or visit: https://pytorch.org/get-started/locally/):
34
+ pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
35
+ ```
36
+ ### Basic usage
37
+ This example shows how to solve a standard multi-objective optimization benchmark (ZDT2) using the **SPREAD** solver.
38
+
39
+ ```python
40
+ import numpy as np
41
+ import torch
42
+
43
+ # Import the SPREAD solver
44
+ from moospread import SPREAD
45
+
46
+ # Import a test problem
47
+ from moospread.tasks import ZDT2
48
+
49
+ # Define the problem
50
+ n_var = 30
51
+ problem = ZDT2(n_var=n_var)
52
+
53
+ # Initialize the SPREAD solver
54
+ solver = SPREAD(
55
+ problem,
56
+ data_size=10000,
57
+ timesteps=5000,
58
+ num_epochs=1000,
59
+ train_tol=100,
60
+ num_blocks=3,
61
+ validation_split=0.1,
62
+ mode="online",
63
+ seed=2026,
64
+ verbose=True
65
+ )
66
+
67
+ # Solve the problem
68
+ results = solver.solve(
69
+ num_points_sample=200,
70
+ strict_guidance=False,
71
+ rho_scale_gamma=0.9,
72
+ nu_t=10.0,
73
+ eta_init=0.9,
74
+ num_inner_steps=10,
75
+ lr_inner=0.9,
76
+ free_initial_h=True,
77
+ use_sigma_rep=False,
78
+ kernel_sigma_rep=0.01,
79
+ iterative_plot=True,
80
+ plot_period=10,
81
+ max_backtracks=25,
82
+ save_results=True,
83
+ samples_store_path="./samples_dir/",
84
+ images_store_path="./images_dir/"
85
+ )
86
+ ```
87
+
88
+ This will train a diffusion-based multi-objective solver, approximate the Pareto front of the ZDT2 problem, and store generated samples and plots in the specified directories.
89
+
90
+ ---
91
+
92
+ <!--
93
+ ### 📚 Next steps
94
+
95
+ For more advanced examples (offline mode, Bayesian mode, custom problems), see the full [documentation](https://moospread.readthedocs.io/en/latest/).
96
+ -->
97
+
98
+ ## Citation
99
+ If you find `moospread` useful in your research, please consider citing:
100
+ ```
101
+ @inproceedings{
102
+ hotegni2026spread,
103
+ title={{SPREAD}: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion},
104
+ author={Hotegni, Sedjro Salomon and Peitz, Sebastian},
105
+ booktitle={The Fourteenth International Conference on Learning Representations},
106
+ year={2026},
107
+ url={https://openreview.net/forum?id=4731mIqv89}
108
+ }
109
+ ```
@@ -0,0 +1,141 @@
1
+ Metadata-Version: 2.4
2
+ Name: moospread
3
+ Version: 0.1.1
4
+ Summary: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion
5
+ Author-email: Sedjro Salomon Hotegni <salomon.hotegni@aims.ac.rw>
6
+ Maintainer-email: Sedjro Salomon Hotegni <salomon.hotegni@tu-dortmund.de>
7
+ License: MIT License
8
+
9
+ Copyright (c) 2026 Sedjro Salomon Hotegni
10
+
11
+ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
12
+
13
+ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
16
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
17
+
18
+ Project-URL: bugs, https://github.com/safe-autonomous-systems/moo-spread/issues
19
+ Project-URL: changelog, https://github.com/safe-autonomous-systems/moo-spread/blob/main/changelog.md
20
+ Project-URL: Homepage, https://github.com/safe-autonomous-systems/moo-spread
21
+ Requires-Python: >=3.8
22
+ Description-Content-Type: text/markdown
23
+ License-File: LICENSE
24
+ Requires-Dist: numpy
25
+ Requires-Dist: torch
26
+ Provides-Extra: dev
27
+ Requires-Dist: coverage; extra == "dev"
28
+ Requires-Dist: mypy; extra == "dev"
29
+ Requires-Dist: pytest; extra == "dev"
30
+ Requires-Dist: ruff; extra == "dev"
31
+ Dynamic: license-file
32
+
33
+ <p align="center">
34
+ <img src="https://github.com/safe-autonomous-systems/moo-spread/raw/main/images/logo_well_spread.png"
35
+ alt="moospread logo" width="300">
36
+ </p>
37
+ <!--
38
+ <p align="center">
39
+ <a href="https://pypi.org/project/moospread/"><img src="https://img.shields.io/pypi/v/advermorel.svg" alt="PyPI version"></a>
40
+ </p>
41
+ -->
42
+
43
+ # SPREAD: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion
44
+
45
+ > SPREAD is a novel sampling-based approach for multi-objective optimization that leverages diffusion models to efficiently refine and generate well-spread Pareto front approximations. It combines the expressiveness of diffusion models with multi-objective optimization principles to achieve both high convergence to the Pareto front and excellent diversity across the objective space. SPREAD demonstrates competitive performance against state-of-the-art methods while providing a flexible framework for different optimization contexts.
46
+
47
+ ## 🚀 Getting Started
48
+
49
+ ### Installation
50
+
51
+ ```bash
52
+ conda create -n moospread python=3.11
53
+ conda activate moospread
54
+ pip install moospread
55
+ # To install CUDA‐enabled PyTorch, run (or visit: https://pytorch.org/get-started/locally/):
56
+ pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
57
+ ```
58
+ Or, to install the latest code from GitHub:
59
+ ```bash
60
+ conda create -n moospread python=3.11
61
+ conda activate moospread
62
+ git clone https://github.com/safe-autonomous-systems/moo-spread.git
63
+ cd moo-spread
64
+ pip install -e .
65
+ # To install CUDA‐enabled PyTorch, run (or visit: https://pytorch.org/get-started/locally/):
66
+ pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
67
+ ```
68
+ ### Basic usage
69
+ This example shows how to solve a standard multi-objective optimization benchmark (ZDT2) using the **SPREAD** solver.
70
+
71
+ ```python
72
+ import numpy as np
73
+ import torch
74
+
75
+ # Import the SPREAD solver
76
+ from moospread import SPREAD
77
+
78
+ # Import a test problem
79
+ from moospread.tasks import ZDT2
80
+
81
+ # Define the problem
82
+ n_var = 30
83
+ problem = ZDT2(n_var=n_var)
84
+
85
+ # Initialize the SPREAD solver
86
+ solver = SPREAD(
87
+ problem,
88
+ data_size=10000,
89
+ timesteps=5000,
90
+ num_epochs=1000,
91
+ train_tol=100,
92
+ num_blocks=3,
93
+ validation_split=0.1,
94
+ mode="online",
95
+ seed=2026,
96
+ verbose=True
97
+ )
98
+
99
+ # Solve the problem
100
+ results = solver.solve(
101
+ num_points_sample=200,
102
+ strict_guidance=False,
103
+ rho_scale_gamma=0.9,
104
+ nu_t=10.0,
105
+ eta_init=0.9,
106
+ num_inner_steps=10,
107
+ lr_inner=0.9,
108
+ free_initial_h=True,
109
+ use_sigma_rep=False,
110
+ kernel_sigma_rep=0.01,
111
+ iterative_plot=True,
112
+ plot_period=10,
113
+ max_backtracks=25,
114
+ save_results=True,
115
+ samples_store_path="./samples_dir/",
116
+ images_store_path="./images_dir/"
117
+ )
118
+ ```
119
+
120
+ This will train a diffusion-based multi-objective solver, approximate the Pareto front of the ZDT2 problem, and store generated samples and plots in the specified directories.
121
+
122
+ ---
123
+
124
+ <!--
125
+ ### 📚 Next steps
126
+
127
+ For more advanced examples (offline mode, Bayesian mode, custom problems), see the full [documentation](https://moospread.readthedocs.io/en/latest/).
128
+ -->
129
+
130
+ ## Citation
131
+ If you find `moospread` useful in your research, please consider citing:
132
+ ```
133
+ @inproceedings{
134
+ hotegni2026spread,
135
+ title={{SPREAD}: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion},
136
+ author={Hotegni, Sedjro Salomon and Peitz, Sebastian},
137
+ booktitle={The Fourteenth International Conference on Learning Representations},
138
+ year={2026},
139
+ url={https://openreview.net/forum?id=4731mIqv89}
140
+ }
141
+ ```
@@ -1,5 +1,6 @@
1
1
  LICENSE
2
2
  README.md
3
+ README_PYPI.md
3
4
  pyproject.toml
4
5
  moospread/__init__.py
5
6
  moospread/core.py
@@ -4,9 +4,9 @@ build-backend = "setuptools.build_meta"
4
4
 
5
5
  [project]
6
6
  name = "moospread"
7
- version = "0.1.0"
7
+ version = "0.1.1"
8
8
  description = "Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion"
9
- readme = "README.md"
9
+ readme = "README_PYPI.md"
10
10
  requires-python = ">=3.8"
11
11
  license = { file = "LICENSE" }
12
12
  authors = [
moospread-0.1.0/PKG-INFO DELETED
@@ -1,75 +0,0 @@
1
- Metadata-Version: 2.4
2
- Name: moospread
3
- Version: 0.1.0
4
- Summary: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion
5
- Author-email: Sedjro Salomon Hotegni <salomon.hotegni@aims.ac.rw>
6
- Maintainer-email: Sedjro Salomon Hotegni <salomon.hotegni@tu-dortmund.de>
7
- License: MIT License
8
-
9
- Copyright (c) 2026 Sedjro Salomon Hotegni
10
-
11
- Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
12
-
13
- The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
14
-
15
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
16
- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
17
-
18
- Project-URL: bugs, https://github.com/safe-autonomous-systems/moo-spread/issues
19
- Project-URL: changelog, https://github.com/safe-autonomous-systems/moo-spread/blob/main/changelog.md
20
- Project-URL: Homepage, https://github.com/safe-autonomous-systems/moo-spread
21
- Requires-Python: >=3.8
22
- Description-Content-Type: text/markdown
23
- License-File: LICENSE
24
- Requires-Dist: numpy
25
- Requires-Dist: torch
26
- Provides-Extra: dev
27
- Requires-Dist: coverage; extra == "dev"
28
- Requires-Dist: mypy; extra == "dev"
29
- Requires-Dist: pytest; extra == "dev"
30
- Requires-Dist: ruff; extra == "dev"
31
- Dynamic: license-file
32
-
33
- <p align="center">
34
- <img src="/images/logo_well_spread.png" style="width: 30%; height: auto;">
35
- </p>
36
-
37
- # SPREAD: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion
38
-
39
- > SPREAD is a novel sampling-based approach for multi-objective optimization that leverages diffusion models to efficiently refine and generate well-spread Pareto front approximations. It combines the expressiveness of diffusion models with multi-objective optimization principles to achieve both high convergence to the Pareto front and excellent diversity across the objective space. SPREAD demonstrates competitive performance against state-of-the-art methods while providing a flexible framework for different optimization contexts.
40
-
41
- ### 🔬 Experiments
42
-
43
- All experiment code is contained in the `/experiments` directory:
44
-
45
- * **Online setting:** `/experiments/spread/`
46
- * **Offline setting:** `/experiments/spread_offline/`
47
- * **Bayesian setting:** `/experiments/spread_bayesian/`
48
-
49
- The following Jupyter notebooks reproduce the plots shown in our paper:
50
-
51
- * `/experiments/spread/notebook_online_spread.ipynb`
52
- * `/experiments/spread_bayesian/notebook_bayesian_spread.ipynb`
53
-
54
- ### ⚙️ Environment Setup
55
-
56
- Each experiment setting comes with its own environment file located in the corresponding folder:
57
-
58
- - Online setting: `experiments/spread/spread.yml`
59
- - Offline setting: `experiments/spread_offline/spread_off.yml`
60
- - Bayesian setting: `experiments/spread_bayesian/spread_bay.yml`
61
-
62
- To create the environment for a given setting, run:
63
- ```bash
64
- conda env create -f experiments/<folder>/<env_name>.yml
65
- conda activate <env_name>
66
- ```
67
- For example, to run the online experiments:
68
- ```bash
69
- conda env create -f experiments/spread/spread.yml
70
- conda activate spread
71
- ```
72
- The offline experiments require installing **Off-MOO-Bench** from the authors’ public repository: https://github.com/lamda-bbo/offline-moo. The datasets should be downloaded into the folder: `experiments/spread_offline/offline_moo/data/`.
73
-
74
-
75
-
moospread-0.1.0/README.md DELETED
@@ -1,43 +0,0 @@
1
- <p align="center">
2
- <img src="/images/logo_well_spread.png" style="width: 30%; height: auto;">
3
- </p>
4
-
5
- # SPREAD: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion
6
-
7
- > SPREAD is a novel sampling-based approach for multi-objective optimization that leverages diffusion models to efficiently refine and generate well-spread Pareto front approximations. It combines the expressiveness of diffusion models with multi-objective optimization principles to achieve both high convergence to the Pareto front and excellent diversity across the objective space. SPREAD demonstrates competitive performance against state-of-the-art methods while providing a flexible framework for different optimization contexts.
8
-
9
- ### 🔬 Experiments
10
-
11
- All experiment code is contained in the `/experiments` directory:
12
-
13
- * **Online setting:** `/experiments/spread/`
14
- * **Offline setting:** `/experiments/spread_offline/`
15
- * **Bayesian setting:** `/experiments/spread_bayesian/`
16
-
17
- The following Jupyter notebooks reproduce the plots shown in our paper:
18
-
19
- * `/experiments/spread/notebook_online_spread.ipynb`
20
- * `/experiments/spread_bayesian/notebook_bayesian_spread.ipynb`
21
-
22
- ### ⚙️ Environment Setup
23
-
24
- Each experiment setting comes with its own environment file located in the corresponding folder:
25
-
26
- - Online setting: `experiments/spread/spread.yml`
27
- - Offline setting: `experiments/spread_offline/spread_off.yml`
28
- - Bayesian setting: `experiments/spread_bayesian/spread_bay.yml`
29
-
30
- To create the environment for a given setting, run:
31
- ```bash
32
- conda env create -f experiments/<folder>/<env_name>.yml
33
- conda activate <env_name>
34
- ```
35
- For example, to run the online experiments:
36
- ```bash
37
- conda env create -f experiments/spread/spread.yml
38
- conda activate spread
39
- ```
40
- The offline experiments require installing **Off-MOO-Bench** from the authors’ public repository: https://github.com/lamda-bbo/offline-moo. The datasets should be downloaded into the folder: `experiments/spread_offline/offline_moo/data/`.
41
-
42
-
43
-
@@ -1,75 +0,0 @@
1
- Metadata-Version: 2.4
2
- Name: moospread
3
- Version: 0.1.0
4
- Summary: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion
5
- Author-email: Sedjro Salomon Hotegni <salomon.hotegni@aims.ac.rw>
6
- Maintainer-email: Sedjro Salomon Hotegni <salomon.hotegni@tu-dortmund.de>
7
- License: MIT License
8
-
9
- Copyright (c) 2026 Sedjro Salomon Hotegni
10
-
11
- Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
12
-
13
- The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
14
-
15
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
16
- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
17
-
18
- Project-URL: bugs, https://github.com/safe-autonomous-systems/moo-spread/issues
19
- Project-URL: changelog, https://github.com/safe-autonomous-systems/moo-spread/blob/main/changelog.md
20
- Project-URL: Homepage, https://github.com/safe-autonomous-systems/moo-spread
21
- Requires-Python: >=3.8
22
- Description-Content-Type: text/markdown
23
- License-File: LICENSE
24
- Requires-Dist: numpy
25
- Requires-Dist: torch
26
- Provides-Extra: dev
27
- Requires-Dist: coverage; extra == "dev"
28
- Requires-Dist: mypy; extra == "dev"
29
- Requires-Dist: pytest; extra == "dev"
30
- Requires-Dist: ruff; extra == "dev"
31
- Dynamic: license-file
32
-
33
- <p align="center">
34
- <img src="/images/logo_well_spread.png" style="width: 30%; height: auto;">
35
- </p>
36
-
37
- # SPREAD: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion
38
-
39
- > SPREAD is a novel sampling-based approach for multi-objective optimization that leverages diffusion models to efficiently refine and generate well-spread Pareto front approximations. It combines the expressiveness of diffusion models with multi-objective optimization principles to achieve both high convergence to the Pareto front and excellent diversity across the objective space. SPREAD demonstrates competitive performance against state-of-the-art methods while providing a flexible framework for different optimization contexts.
40
-
41
- ### 🔬 Experiments
42
-
43
- All experiment code is contained in the `/experiments` directory:
44
-
45
- * **Online setting:** `/experiments/spread/`
46
- * **Offline setting:** `/experiments/spread_offline/`
47
- * **Bayesian setting:** `/experiments/spread_bayesian/`
48
-
49
- The following Jupyter notebooks reproduce the plots shown in our paper:
50
-
51
- * `/experiments/spread/notebook_online_spread.ipynb`
52
- * `/experiments/spread_bayesian/notebook_bayesian_spread.ipynb`
53
-
54
- ### ⚙️ Environment Setup
55
-
56
- Each experiment setting comes with its own environment file located in the corresponding folder:
57
-
58
- - Online setting: `experiments/spread/spread.yml`
59
- - Offline setting: `experiments/spread_offline/spread_off.yml`
60
- - Bayesian setting: `experiments/spread_bayesian/spread_bay.yml`
61
-
62
- To create the environment for a given setting, run:
63
- ```bash
64
- conda env create -f experiments/<folder>/<env_name>.yml
65
- conda activate <env_name>
66
- ```
67
- For example, to run the online experiments:
68
- ```bash
69
- conda env create -f experiments/spread/spread.yml
70
- conda activate spread
71
- ```
72
- The offline experiments require installing **Off-MOO-Bench** from the authors’ public repository: https://github.com/lamda-bbo/offline-moo. The datasets should be downloaded into the folder: `experiments/spread_offline/offline_moo/data/`.
73
-
74
-
75
-