autogluon.tabular 1.3.2b20250610__py3-none-any.whl → 1.4.1b20251214__py3-none-any.whl
This diff compares the contents of two publicly released versions of the package, as they appear in their public registry. It is provided for informational purposes only.
- autogluon/tabular/configs/config_helper.py +1 -1
- autogluon/tabular/configs/hyperparameter_configs.py +2 -265
- autogluon/tabular/configs/pipeline_presets.py +130 -0
- autogluon/tabular/configs/presets_configs.py +51 -26
- autogluon/tabular/configs/zeroshot/zeroshot_portfolio_2023.py +0 -1
- autogluon/tabular/configs/zeroshot/zeroshot_portfolio_2025.py +310 -0
- autogluon/tabular/models/__init__.py +6 -1
- autogluon/tabular/models/_utils/rapids_utils.py +1 -1
- autogluon/tabular/models/automm/automm_model.py +2 -0
- autogluon/tabular/models/automm/ft_transformer.py +4 -1
- autogluon/tabular/models/catboost/callbacks.py +3 -2
- autogluon/tabular/models/catboost/catboost_model.py +15 -9
- autogluon/tabular/models/catboost/catboost_utils.py +17 -3
- autogluon/tabular/models/ebm/__init__.py +0 -0
- autogluon/tabular/models/ebm/ebm_model.py +259 -0
- autogluon/tabular/models/ebm/hyperparameters/__init__.py +0 -0
- autogluon/tabular/models/ebm/hyperparameters/parameters.py +39 -0
- autogluon/tabular/models/ebm/hyperparameters/searchspaces.py +72 -0
- autogluon/tabular/models/fastainn/tabular_nn_fastai.py +7 -5
- autogluon/tabular/models/knn/knn_model.py +7 -3
- autogluon/tabular/models/lgb/lgb_model.py +60 -21
- autogluon/tabular/models/lr/lr_model.py +6 -1
- autogluon/tabular/models/lr/lr_preprocessing_utils.py +6 -7
- autogluon/tabular/models/lr/lr_rapids_model.py +45 -5
- autogluon/tabular/models/mitra/__init__.py +0 -0
- autogluon/tabular/models/mitra/_internal/__init__.py +1 -0
- autogluon/tabular/models/mitra/_internal/config/__init__.py +1 -0
- autogluon/tabular/models/mitra/_internal/config/config_pretrain.py +190 -0
- autogluon/tabular/models/mitra/_internal/config/config_run.py +32 -0
- autogluon/tabular/models/mitra/_internal/config/enums.py +162 -0
- autogluon/tabular/models/mitra/_internal/core/__init__.py +1 -0
- autogluon/tabular/models/mitra/_internal/core/callbacks.py +94 -0
- autogluon/tabular/models/mitra/_internal/core/get_loss.py +54 -0
- autogluon/tabular/models/mitra/_internal/core/get_optimizer.py +108 -0
- autogluon/tabular/models/mitra/_internal/core/get_scheduler.py +67 -0
- autogluon/tabular/models/mitra/_internal/core/prediction_metrics.py +132 -0
- autogluon/tabular/models/mitra/_internal/core/trainer_finetune.py +373 -0
- autogluon/tabular/models/mitra/_internal/data/__init__.py +1 -0
- autogluon/tabular/models/mitra/_internal/data/collator.py +46 -0
- autogluon/tabular/models/mitra/_internal/data/dataset_finetune.py +136 -0
- autogluon/tabular/models/mitra/_internal/data/dataset_split.py +57 -0
- autogluon/tabular/models/mitra/_internal/data/preprocessor.py +420 -0
- autogluon/tabular/models/mitra/_internal/models/__init__.py +1 -0
- autogluon/tabular/models/mitra/_internal/models/base.py +21 -0
- autogluon/tabular/models/mitra/_internal/models/embedding.py +182 -0
- autogluon/tabular/models/mitra/_internal/models/tab2d.py +667 -0
- autogluon/tabular/models/mitra/_internal/utils/__init__.py +1 -0
- autogluon/tabular/models/mitra/_internal/utils/set_seed.py +15 -0
- autogluon/tabular/models/mitra/mitra_model.py +380 -0
- autogluon/tabular/models/mitra/sklearn_interface.py +494 -0
- autogluon/tabular/models/realmlp/__init__.py +0 -0
- autogluon/tabular/models/realmlp/realmlp_model.py +360 -0
- autogluon/tabular/models/rf/rf_model.py +11 -6
- autogluon/tabular/models/tabicl/__init__.py +0 -0
- autogluon/tabular/models/tabicl/tabicl_model.py +179 -0
- autogluon/tabular/models/tabm/__init__.py +0 -0
- autogluon/tabular/models/tabm/_tabm_internal.py +545 -0
- autogluon/tabular/models/tabm/rtdl_num_embeddings.py +810 -0
- autogluon/tabular/models/tabm/tabm_model.py +356 -0
- autogluon/tabular/models/tabm/tabm_reference.py +631 -0
- autogluon/tabular/models/tabpfnmix/tabpfnmix_model.py +13 -7
- autogluon/tabular/models/tabpfnv2/__init__.py +0 -0
- autogluon/tabular/models/tabpfnv2/rfpfn/__init__.py +20 -0
- autogluon/tabular/models/tabpfnv2/rfpfn/configs.py +40 -0
- autogluon/tabular/models/tabpfnv2/rfpfn/scoring_utils.py +201 -0
- autogluon/tabular/models/tabpfnv2/rfpfn/sklearn_based_decision_tree_tabpfn.py +1464 -0
- autogluon/tabular/models/tabpfnv2/rfpfn/sklearn_based_random_forest_tabpfn.py +747 -0
- autogluon/tabular/models/tabpfnv2/rfpfn/sklearn_compat.py +863 -0
- autogluon/tabular/models/tabpfnv2/rfpfn/utils.py +106 -0
- autogluon/tabular/models/tabpfnv2/tabpfnv2_model.py +388 -0
- autogluon/tabular/models/tabular_nn/hyperparameters/parameters.py +1 -3
- autogluon/tabular/models/tabular_nn/torch/tabular_nn_torch.py +5 -5
- autogluon/tabular/models/xgboost/xgboost_model.py +10 -3
- autogluon/tabular/predictor/predictor.py +147 -84
- autogluon/tabular/registry/_ag_model_registry.py +12 -2
- autogluon/tabular/testing/fit_helper.py +57 -27
- autogluon/tabular/testing/generate_datasets.py +7 -0
- autogluon/tabular/trainer/abstract_trainer.py +3 -1
- autogluon/tabular/trainer/model_presets/presets.py +10 -1
- autogluon/tabular/version.py +1 -1
- autogluon.tabular-1.4.1b20251214-py3.11-nspkg.pth +1 -0
- {autogluon.tabular-1.3.2b20250610.dist-info → autogluon_tabular-1.4.1b20251214.dist-info}/METADATA +112 -57
- {autogluon.tabular-1.3.2b20250610.dist-info → autogluon_tabular-1.4.1b20251214.dist-info}/RECORD +89 -40
- {autogluon.tabular-1.3.2b20250610.dist-info → autogluon_tabular-1.4.1b20251214.dist-info}/WHEEL +1 -1
- autogluon/tabular/models/tabpfn/__init__.py +0 -1
- autogluon/tabular/models/tabpfn/tabpfn_model.py +0 -153
- autogluon.tabular-1.3.2b20250610-py3.9-nspkg.pth +0 -1
- {autogluon.tabular-1.3.2b20250610.dist-info → autogluon_tabular-1.4.1b20251214.dist-info/licenses}/LICENSE +0 -0
- {autogluon.tabular-1.3.2b20250610.dist-info → autogluon_tabular-1.4.1b20251214.dist-info/licenses}/NOTICE +0 -0
- {autogluon.tabular-1.3.2b20250610.dist-info → autogluon_tabular-1.4.1b20251214.dist-info}/namespace_packages.txt +0 -0
- {autogluon.tabular-1.3.2b20250610.dist-info → autogluon_tabular-1.4.1b20251214.dist-info}/top_level.txt +0 -0
- {autogluon.tabular-1.3.2b20250610.dist-info → autogluon_tabular-1.4.1b20251214.dist-info}/zip-safe +0 -0
autogluon/tabular/testing/fit_helper.py
CHANGED

@@ -21,6 +21,7 @@ from autogluon.tabular.testing.generate_datasets import (
     generate_toy_multiclass_dataset,
     generate_toy_regression_dataset,
     generate_toy_quantile_dataset,
+    generate_toy_quantile_single_level_dataset,
     generate_toy_multiclass_10_dataset,
     generate_toy_regression_10_dataset,
     generate_toy_quantile_10_dataset,
@@ -72,6 +73,7 @@ class DatasetLoaderHelper:
         toy_multiclass=generate_toy_multiclass_dataset,
         toy_regression=generate_toy_regression_dataset,
         toy_quantile=generate_toy_quantile_dataset,
+        toy_quantile_single_level=generate_toy_quantile_single_level_dataset,
         toy_binary_10=generate_toy_binary_10_dataset,
         toy_multiclass_10=generate_toy_multiclass_10_dataset,
         toy_regression_10=generate_toy_regression_10_dataset,
@@ -173,6 +175,7 @@ class FitHelper:
         use_test_for_val: bool = False,
         raise_on_model_failure: bool | None = None,
         deepcopy_fit_args: bool = True,
+        verify_model_seed: bool = False,
     ) -> TabularPredictor:
         if compiler_configs is None:
             compiler_configs = {}
@@ -267,6 +270,11 @@ class FitHelper:
                 assert not model_info["val_in_fit"], f"val data must not be present in refit model if `can_refit_full=True`. Maybe an exception occurred?"
             else:
                 assert model_info["val_in_fit"], f"val data must be present in refit model if `can_refit_full=False`"
+        if verify_model_seed:
+            model_names = predictor.model_names()
+            for model_name in model_names:
+                model = predictor._trainer.load_model(model_name)
+                _verify_model_seed(model=model)
 
         if predictor_info:
             predictor.info()
@@ -337,6 +345,7 @@ class FitHelper:
         require_known_problem_types: bool = True,
         raise_on_model_failure: bool = True,
         problem_types: list[str] | None = None,
+        verify_model_seed: bool = True,
         **kwargs,
     ):
         """
@@ -353,12 +362,18 @@ class FitHelper:
         problem_types: list[str], optional
             If specified, checks the given problem_types.
             If None, checks `model_cls.supported_problem_types()`
+        verify_model_seed: bool = True
         **kwargs
 
         Returns
         -------
 
         """
+        if verify_model_seed and model_cls.seed_name is not None:
+            # verify that the seed logic works
+            model_hyperparameters = model_hyperparameters.copy()
+            model_hyperparameters[model_cls.seed_name] = 42
+
         fit_args = dict(
             hyperparameters={model_cls: model_hyperparameters},
         )
@@ -393,10 +408,10 @@ class FitHelper:
         )
 
         problem_type_dataset_map = {
-            "binary": "toy_binary",
-            "multiclass": "toy_multiclass",
-            "regression": "toy_regression",
-            "quantile": "toy_quantile",
+            "binary": ["toy_binary"],
+            "multiclass": ["toy_multiclass"],
+            "regression": ["toy_regression"],
+            "quantile": ["toy_quantile", "toy_quantile_single_level"],
         }
 
         problem_types_refit_full = []
@@ -419,29 +434,30 @@ class FitHelper:
            if extra_metrics:
                _extra_metrics = METRICS.get(problem_type, None)
            refit_full = problem_type in problem_types_refit_full
-           dataset_name
+           for dataset_name in problem_type_dataset_map[problem_type]:
+               FitHelper.fit_and_validate_dataset(
+                   dataset_name=dataset_name,
+                   fit_args=fit_args,
+                   fit_weighted_ensemble=False,
+                   refit_full=refit_full,
+                   extra_metrics=_extra_metrics,
+                   raise_on_model_failure=raise_on_model_failure,
+                   verify_model_seed=verify_model_seed,
+                   **kwargs,
+               )
 
         if bag:
            model_params_bag = copy.deepcopy(model_hyperparameters)
-           model_params_bag["
+           model_params_bag["ag.ens.fold_fitting_strategy"] = "sequential_local"
            fit_args_bag = dict(
                hyperparameters={model_cls: model_params_bag},
                num_bag_folds=2,
                num_bag_sets=1,
            )
            if isinstance(bag, bool):
-               problem_types_bag =
+               problem_types_bag = problem_types_to_check
            elif bag == "first":
-               problem_types_bag =
+               problem_types_bag = problem_types_to_check[:1]
            else:
                raise ValueError(f"Unknown 'bag' value: {bag}")
 
@@ -450,16 +466,17 @@ class FitHelper:
            if extra_metrics:
                _extra_metrics = METRICS.get(problem_type, None)
            refit_full = problem_type in problem_types_refit_full
-           dataset_name
+           for dataset_name in problem_type_dataset_map[problem_type]:
+               FitHelper.fit_and_validate_dataset(
+                   dataset_name=dataset_name,
+                   fit_args=fit_args_bag,
+                   fit_weighted_ensemble=False,
+                   refit_full=refit_full,
+                   extra_metrics=_extra_metrics,
+                   raise_on_model_failure=raise_on_model_failure,
+                   verify_model_seed=verify_model_seed,
+                   **kwargs,
+               )
 
 
 def stacked_overfitting_assert(
@@ -474,3 +491,16 @@ def stacked_overfitting_assert(
     if expected_stacked_overfitting_at_test is not None:
         stacked_overfitting = check_stacked_overfitting_from_leaderboard(lb)
         assert stacked_overfitting == expected_stacked_overfitting_at_test, "Expected stacked overfitting at test mismatch!"
+
+
+def _verify_model_seed(model: AbstractModel):
+    assert model.random_seed is None or isinstance(model.random_seed, int)
+    if model.seed_name is not None:
+        if model.seed_name in model._user_params:
+            assert model.random_seed == model._user_params[model.seed_name]
+            assert model.seed_name in model.params
+            assert model.random_seed == model.params[model.seed_name]
+    if isinstance(model, BaggedEnsembleModel):
+        for child in model.models:
+            child = model.load_child(child)
+            _verify_model_seed(child)
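The new `verify_model_seed` flag threads through `fit_and_validate_dataset` so that every fitted model (including bagged children) is checked by `_verify_model_seed`. A minimal usage sketch, based only on the signature shown in the hunks above; the import paths, model class, and empty hyperparameter dict are illustrative assumptions:

```python
# Hedged sketch: exercise the new verify_model_seed flag from the diff above.
from autogluon.tabular.models import LGBModel       # assumed importable model class
from autogluon.tabular.testing import FitHelper     # assumed re-export of fit_helper.FitHelper

predictor = FitHelper.fit_and_validate_dataset(
    dataset_name="toy_quantile_single_level",        # new toy dataset added in this release
    fit_args={"hyperparameters": {LGBModel: {}}},
    fit_weighted_ensemble=False,
    verify_model_seed=True,                          # loads each fitted model and runs _verify_model_seed on it
)
```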

autogluon/tabular/testing/generate_datasets.py
CHANGED

@@ -64,6 +64,13 @@ def generate_toy_quantile_dataset():
     return train_data, test_data, dataset_info
 
 
+def generate_toy_quantile_single_level_dataset():
+    train_data, test_data, dataset_info = generate_toy_regression_dataset()
+    dataset_info["problem_type"] = QUANTILE
+    dataset_info["init_kwargs"] = {"quantile_levels": [0.71]}
+    return train_data, test_data, dataset_info
+
+
 def generate_toy_binary_10_dataset():
     label = "label"
     dummy_dataset = {
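The single-level variant reuses the toy regression data and only swaps the problem type and quantile levels; the `init_kwargs` are what the test harness is assumed to forward to the predictor constructor. Roughly equivalent end-user configuration (constructor arguments inferred from the `dataset_info` above):

```python
# Sketch of the predictor configuration implied by the new toy dataset:
# quantile regression with a single quantile level.
from autogluon.tabular import TabularPredictor

predictor = TabularPredictor(
    label="label",            # the toy datasets use "label" as the target column
    problem_type="quantile",
    quantile_levels=[0.71],   # single level, matching init_kwargs above
)
```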

autogluon/tabular/trainer/abstract_trainer.py
CHANGED

@@ -2131,6 +2131,8 @@ class AbstractTabularTrainer(AbstractTrainer[AbstractModel]):
         if isinstance(model, BaggedEnsembleModel) and not compute_score:
             # Do not perform OOF predictions when we don't compute a score.
             model_fit_kwargs["_skip_oof"] = True
+        if not isinstance(model, BaggedEnsembleModel):
+            model_fit_kwargs.setdefault("log_resources", True)
 
         model_fit_kwargs = dict(
             model=model,
@@ -2950,7 +2952,7 @@ class AbstractTabularTrainer(AbstractTrainer[AbstractModel]):
         if fit_strategy == "parallel":
             num_cpus = kwargs.get("total_resources", {}).get("num_cpus", "auto")
             if isinstance(num_cpus, str) and num_cpus == "auto":
-                num_cpus = get_resource_manager().
+                num_cpus = get_resource_manager().get_cpu_count()
             if num_cpus < 12:
                 force_parallel = os.environ.get("AG_FORCE_PARALLEL", "False") == "True"
                 if not force_parallel:
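The first hunk enables resource logging by default for non-bagged model fits; the second gates the parallel strategy on machines with fewer than 12 CPUs behind the `AG_FORCE_PARALLEL` environment variable. A hedged sketch of opting back in on a small machine, assuming the trainer's `fit_strategy == "parallel"` branch is reached via the predictor's `fit_strategy` argument:

```python
# Assumption-laden sketch: force the parallel fit strategy despite num_cpus < 12,
# mirroring the AG_FORCE_PARALLEL check in the trainer hunk above.
import os

os.environ["AG_FORCE_PARALLEL"] = "True"

from autogluon.tabular import TabularPredictor

predictor = TabularPredictor(label="class").fit(
    "train.csv",
    fit_strategy="parallel",  # assumed user-facing entry point for the code path above
)
```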

autogluon/tabular/trainer/model_presets/presets.py
CHANGED

@@ -311,6 +311,16 @@ def model_factory(
     model_params.pop(AG_ARGS, None)
     model_params.pop(AG_ARGS_ENSEMBLE, None)
 
+    extra_ensemble_hyperparameters = copy.deepcopy(model.get(AG_ARGS_ENSEMBLE, dict()))
+
+    # Enable user to pass ensemble hyperparameters via `"ag.ens.fold_fitting_strategy": "sequential_local"`
+    ag_args_ensemble_prefix = "ag.ens."
+    model_param_keys = list(model_params.keys())
+    for key in model_param_keys:
+        if key.startswith(ag_args_ensemble_prefix):
+            key_suffix = key.split(ag_args_ensemble_prefix, 1)[-1]
+            extra_ensemble_hyperparameters[key_suffix] = model_params.pop(key)
+
     model_init_kwargs = dict(
         path=path,
         name=name,
@@ -321,7 +331,6 @@ def model_factory(
 
     if ensemble_kwargs is not None:
         ensemble_kwargs_model = copy.deepcopy(ensemble_kwargs)
-        extra_ensemble_hyperparameters = copy.deepcopy(model.get(AG_ARGS_ENSEMBLE, dict()))
         ensemble_kwargs_model["hyperparameters"] = ensemble_kwargs_model.get("hyperparameters", {})
         if ensemble_kwargs_model["hyperparameters"] is None:
             ensemble_kwargs_model["hyperparameters"] = {}
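As the in-code comment notes, the new `ag.ens.` prefix lets ensemble-level hyperparameters ride along inside a model's own hyperparameter dict. A usage sketch grounded in that comment and in the bagged test above; the "GBM" key and bagging settings are illustrative:

```python
# Sketch: pass an ensemble hyperparameter through the model's hyperparameter dict
# using the "ag.ens." prefix handled by model_factory above.
from autogluon.tabular import TabularPredictor

predictor = TabularPredictor(label="class").fit(
    "train.csv",
    hyperparameters={
        "GBM": {"ag.ens.fold_fitting_strategy": "sequential_local"},  # prefix stripped, forwarded to the bagged ensemble
    },
    num_bag_folds=2,  # bagging must be active for the ensemble hyperparameter to take effect
)
```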

autogluon/tabular/version.py
CHANGED

autogluon.tabular-1.4.1b20251214-py3.11-nspkg.pth
ADDED

@@ -0,0 +1 @@
+import sys, types, os;p = os.path.join(sys._getframe(1).f_locals['sitedir'], *('autogluon',));importlib = __import__('importlib.util');__import__('importlib.machinery');m = sys.modules.setdefault('autogluon', importlib.util.module_from_spec(importlib.machinery.PathFinder.find_spec('autogluon', [os.path.dirname(p)])));m = m or sys.modules.setdefault('autogluon', types.ModuleType('autogluon'));mp = (m or []) and m.__dict__.setdefault('__path__',[]);(p not in mp) and mp.append(p)

{autogluon.tabular-1.3.2b20250610.dist-info → autogluon_tabular-1.4.1b20251214.dist-info}/METADATA
RENAMED

@@ -1,6 +1,6 @@
-Metadata-Version: 2.
+Metadata-Version: 2.4
 Name: autogluon.tabular
-Version: 1.
+Version: 1.4.1b20251214
 Summary: Fast and Accurate ML in 3 Lines of Code
 Home-page: https://github.com/autogluon/autogluon
 Author: AutoGluon Community
@@ -9,7 +9,6 @@ Project-URL: Documentation, https://auto.gluon.ai
 Project-URL: Bug Reports, https://github.com/autogluon/autogluon/issues
 Project-URL: Source, https://github.com/autogluon/autogluon/
 Project-URL: Contribute!, https://github.com/autogluon/autogluon/blob/master/CONTRIBUTING.md
-Platform: UNKNOWN
 Classifier: Development Status :: 4 - Beta
 Classifier: Intended Audience :: Education
 Classifier: Intended Audience :: Developers
@@ -24,75 +23,130 @@ Classifier: Operating System :: Microsoft :: Windows
 Classifier: Operating System :: POSIX
 Classifier: Operating System :: Unix
 Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.9
 Classifier: Programming Language :: Python :: 3.10
 Classifier: Programming Language :: Python :: 3.11
 Classifier: Programming Language :: Python :: 3.12
+Classifier: Programming Language :: Python :: 3.13
 Classifier: Topic :: Software Development
 Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
 Classifier: Topic :: Scientific/Engineering :: Information Analysis
 Classifier: Topic :: Scientific/Engineering :: Image Recognition
-Requires-Python: >=3.
+Requires-Python: >=3.10, <3.14
 Description-Content-Type: text/markdown
-License-File:
-License-File:
-Requires-Dist: numpy<2.
-Requires-Dist: scipy<1.
-Requires-Dist: pandas<2.
-Requires-Dist: scikit-learn<1.
+License-File: LICENSE
+License-File: NOTICE
+Requires-Dist: numpy<2.4.0,>=1.25.0
+Requires-Dist: scipy<1.17,>=1.5.4
+Requires-Dist: pandas<2.4.0,>=2.0.0
+Requires-Dist: scikit-learn<1.8.0,>=1.4.0
 Requires-Dist: networkx<4,>=3.0
-Requires-Dist: autogluon.core==1.
-Requires-Dist: autogluon.features==1.
-Provides-Extra:
-Requires-Dist:
-Requires-Dist: autogluon.core[all]==1.3.2b20250610; extra == "all"
-Requires-Dist: xgboost<3.1,>=2.0; extra == "all"
-Requires-Dist: huggingface-hub[torch]; extra == "all"
-Requires-Dist: lightgbm<4.7,>=4.0; extra == "all"
-Requires-Dist: torch<2.7,>=2.2; extra == "all"
-Requires-Dist: einops<0.9,>=0.7; extra == "all"
-Requires-Dist: numpy<2.3.0,>=1.25; extra == "all"
-Requires-Dist: fastai<2.9,>=2.3.1; extra == "all"
-Requires-Dist: spacy<3.9; extra == "all"
+Requires-Dist: autogluon.core==1.4.1b20251214
+Requires-Dist: autogluon.features==1.4.1b20251214
+Provides-Extra: lightgbm
+Requires-Dist: lightgbm<4.7,>=4.0; extra == "lightgbm"
 Provides-Extra: catboost
 Requires-Dist: numpy<2.3.0,>=1.25; extra == "catboost"
 Requires-Dist: catboost<1.3,>=1.2; extra == "catboost"
+Provides-Extra: xgboost
+Requires-Dist: xgboost<3.1,>=2.0; extra == "xgboost"
+Provides-Extra: realmlp
+Requires-Dist: pytabkit<1.7,>=1.6; extra == "realmlp"
+Provides-Extra: interpret
+Requires-Dist: interpret-core<0.8,>=0.7.2; extra == "interpret"
 Provides-Extra: fastai
 Requires-Dist: spacy<3.9; extra == "fastai"
-Requires-Dist: torch<2.
+Requires-Dist: torch<2.10,>=2.6; extra == "fastai"
 Requires-Dist: fastai<2.9,>=2.3.1; extra == "fastai"
-Provides-Extra:
-Requires-Dist:
-Provides-Extra:
-Requires-Dist:
+Provides-Extra: tabm
+Requires-Dist: torch<2.10,>=2.6; extra == "tabm"
+Provides-Extra: tabpfn
+Requires-Dist: tabpfn<2.2,>=2.0.9; extra == "tabpfn"
+Provides-Extra: tabpfnmix
+Requires-Dist: torch<2.10,>=2.6; extra == "tabpfnmix"
+Requires-Dist: huggingface_hub[torch]<1.0; extra == "tabpfnmix"
+Requires-Dist: einops<0.9,>=0.7; extra == "tabpfnmix"
+Provides-Extra: mitra
+Requires-Dist: loguru; extra == "mitra"
+Requires-Dist: einx; extra == "mitra"
+Requires-Dist: omegaconf; extra == "mitra"
+Requires-Dist: torch<2.10,>=2.6; extra == "mitra"
+Requires-Dist: transformers; extra == "mitra"
+Requires-Dist: huggingface_hub[torch]<1.0; extra == "mitra"
+Requires-Dist: einops<0.9,>=0.7; extra == "mitra"
+Provides-Extra: tabicl
+Requires-Dist: tabicl<0.2,>=0.1.3; extra == "tabicl"
 Provides-Extra: ray
-Requires-Dist: autogluon.core[all]==1.
+Requires-Dist: autogluon.core[all]==1.4.1b20251214; extra == "ray"
 Provides-Extra: skex
 Requires-Dist: scikit-learn-intelex<2025.5,>=2024.0; extra == "skex"
+Provides-Extra: imodels
+Requires-Dist: imodels<2.1.0,>=1.3.10; extra == "imodels"
 Provides-Extra: skl2onnx
-Requires-Dist: skl2onnx<1.
-Requires-Dist:
-Requires-Dist:
-Requires-Dist:
-Requires-Dist:
-Provides-Extra:
-Requires-Dist:
-Requires-Dist: torch<
-Requires-Dist:
-Requires-Dist: einops<0.9,>=0.7; extra == "
+Requires-Dist: skl2onnx<1.20.0,>=1.15.0; extra == "skl2onnx"
+Requires-Dist: onnx!=1.16.2,<1.21.0,>=1.13.0; platform_system == "Windows" and extra == "skl2onnx"
+Requires-Dist: onnx<1.21.0,>=1.13.0; platform_system != "Windows" and extra == "skl2onnx"
+Requires-Dist: onnxruntime<1.24.0,>=1.17.0; extra == "skl2onnx"
+Requires-Dist: onnxruntime-gpu<1.24.0,>=1.17.0; (platform_system != "Darwin" and platform_machine != "aarch64") and extra == "skl2onnx"
+Provides-Extra: all
+Requires-Dist: numpy<2.3.0,>=1.25; extra == "all"
+Requires-Dist: xgboost<3.1,>=2.0; extra == "all"
+Requires-Dist: huggingface_hub[torch]<1.0; extra == "all"
+Requires-Dist: omegaconf; extra == "all"
+Requires-Dist: einops<0.9,>=0.7; extra == "all"
+Requires-Dist: lightgbm<4.7,>=4.0; extra == "all"
+Requires-Dist: transformers; extra == "all"
+Requires-Dist: autogluon.core[all]==1.4.1b20251214; extra == "all"
+Requires-Dist: torch<2.10,>=2.6; extra == "all"
+Requires-Dist: spacy<3.9; extra == "all"
+Requires-Dist: einx; extra == "all"
+Requires-Dist: loguru; extra == "all"
+Requires-Dist: catboost<1.3,>=1.2; extra == "all"
+Requires-Dist: fastai<2.9,>=2.3.1; extra == "all"
+Provides-Extra: tabarena
+Requires-Dist: numpy<2.3.0,>=1.25; extra == "tabarena"
+Requires-Dist: xgboost<3.1,>=2.0; extra == "tabarena"
+Requires-Dist: huggingface_hub[torch]<1.0; extra == "tabarena"
+Requires-Dist: omegaconf; extra == "tabarena"
+Requires-Dist: einops<0.9,>=0.7; extra == "tabarena"
+Requires-Dist: tabpfn<2.2,>=2.0.9; extra == "tabarena"
+Requires-Dist: lightgbm<4.7,>=4.0; extra == "tabarena"
+Requires-Dist: transformers; extra == "tabarena"
+Requires-Dist: autogluon.core[all]==1.4.1b20251214; extra == "tabarena"
+Requires-Dist: interpret-core<0.8,>=0.7.2; extra == "tabarena"
+Requires-Dist: torch<2.10,>=2.6; extra == "tabarena"
+Requires-Dist: tabicl<0.2,>=0.1.3; extra == "tabarena"
+Requires-Dist: pytabkit<1.7,>=1.6; extra == "tabarena"
+Requires-Dist: spacy<3.9; extra == "tabarena"
+Requires-Dist: einx; extra == "tabarena"
+Requires-Dist: loguru; extra == "tabarena"
+Requires-Dist: catboost<1.3,>=1.2; extra == "tabarena"
+Requires-Dist: fastai<2.9,>=2.3.1; extra == "tabarena"
 Provides-Extra: tests
-Requires-Dist:
-Requires-Dist:
+Requires-Dist: interpret-core<0.8,>=0.7.2; extra == "tests"
+Requires-Dist: tabicl<0.2,>=0.1.3; extra == "tests"
+Requires-Dist: tabpfn<2.2,>=2.0.9; extra == "tests"
+Requires-Dist: pytabkit<1.7,>=1.6; extra == "tests"
+Requires-Dist: torch<2.10,>=2.6; extra == "tests"
+Requires-Dist: huggingface_hub[torch]<1.0; extra == "tests"
 Requires-Dist: einops<0.9,>=0.7; extra == "tests"
 Requires-Dist: imodels<2.1.0,>=1.3.10; extra == "tests"
-Requires-Dist: skl2onnx<1.
-Requires-Dist:
-Requires-Dist:
-Requires-Dist:
-Requires-Dist:
+Requires-Dist: skl2onnx<1.20.0,>=1.15.0; extra == "tests"
+Requires-Dist: onnx!=1.16.2,<1.21.0,>=1.13.0; platform_system == "Windows" and extra == "tests"
+Requires-Dist: onnx<1.21.0,>=1.13.0; platform_system != "Windows" and extra == "tests"
+Requires-Dist: onnxruntime<1.24.0,>=1.17.0; extra == "tests"
+Requires-Dist: onnxruntime-gpu<1.24.0,>=1.17.0; (platform_system != "Darwin" and platform_machine != "aarch64") and extra == "tests"
+Dynamic: author
+Dynamic: classifier
+Dynamic: description
+Dynamic: description-content-type
+Dynamic: home-page
+Dynamic: license
+Dynamic: license-file
+Dynamic: project-url
+Dynamic: provides-extra
+Dynamic: requires-dist
+Dynamic: requires-python
+Dynamic: summary
 
 
 
@@ -103,7 +157,7 @@ Requires-Dist: xgboost<3.1,>=2.0; extra == "xgboost"
 
 [](https://github.com/autogluon/autogluon/releases)
 [](https://anaconda.org/conda-forge/autogluon)
-[](https://pypi.org/project/autogluon/)
+[](https://pypi.org/project/autogluon/)
 [](https://pepy.tech/project/autogluon)
 [](./LICENSE)
 [](https://discord.gg/wjUmjqAc2N)
@@ -120,7 +174,7 @@ AutoGluon, developed by AWS AI, automates machine learning tasks enabling you to
 
 ## 💾 Installation
 
-AutoGluon is supported on Python 3.
+AutoGluon is supported on Python 3.10 - 3.13 and is available on Linux, MacOS, and Windows.
 
 You can install AutoGluon with:
 
@@ -136,15 +190,15 @@ Build accurate end-to-end ML models in just 3 lines of code!
 
 ```python
 from autogluon.tabular import TabularPredictor
-predictor = TabularPredictor(label="class").fit("train.csv")
+predictor = TabularPredictor(label="class").fit("train.csv", presets="best")
 predictions = predictor.predict("test.csv")
 ```
 
 | AutoGluon Task | Quickstart | API |
 |:--------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------:|
 | TabularPredictor | [](https://auto.gluon.ai/stable/tutorials/tabular/tabular-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.tabular.TabularPredictor.html) |
-| MultiModalPredictor | [](https://auto.gluon.ai/stable/tutorials/multimodal/multimodal_prediction/multimodal-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.multimodal.MultiModalPredictor.html) |
 | TimeSeriesPredictor | [](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.timeseries.TimeSeriesPredictor.html) |
+| MultiModalPredictor | [](https://auto.gluon.ai/stable/tutorials/multimodal/multimodal_prediction/multimodal-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.multimodal.MultiModalPredictor.html) |
 
 ## :mag: Resources
 
@@ -167,7 +221,10 @@ Below is a curated list of recent tutorials and talks on AutoGluon. A comprehens
 - [Benchmarking Multimodal AutoML for Tabular Data with Text Fields](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/9bf31c7ff062936a96d3c8bd1f8f2ff3-Paper-round2.pdf) (*NeurIPS*, 2021) ([BibTeX](CITING.md#autogluonmultimodal))
 - [XTab: Cross-table Pretraining for Tabular Transformers](https://proceedings.mlr.press/v202/zhu23k/zhu23k.pdf) (*ICML*, 2023)
 - [AutoGluon-TimeSeries: AutoML for Probabilistic Time Series Forecasting](https://arxiv.org/abs/2308.05566) (*AutoML Conf*, 2023) ([BibTeX](CITING.md#autogluontimeseries))
-- [TabRepo: A Large Scale Repository of Tabular Model Evaluations and its AutoML Applications](https://arxiv.org/pdf/2311.02971.pdf) (*
+- [TabRepo: A Large Scale Repository of Tabular Model Evaluations and its AutoML Applications](https://arxiv.org/pdf/2311.02971.pdf) (*AutoML Conf*, 2024)
+- [AutoGluon-Multimodal (AutoMM): Supercharging Multimodal AutoML with Foundation Models](https://arxiv.org/pdf/2404.16233) (*AutoML Conf*, 2024) ([BibTeX](CITING.md#autogluonmultimodal))
+- [Multi-layer Stack Ensembles for Time Series Forecasting](https://arxiv.org/abs/2511.15350) (*AutoML Conf*, 2025) ([BibTeX](CITING.md#autogluontimeseries))
+- [Chronos-2: From Univariate to Universal Forecasting](https://arxiv.org/abs/2510.15821) (*Arxiv*, 2025) ([BibTeX](CITING.md#autogluontimeseries))
 
 ### Articles
 - [AutoGluon-TimeSeries: Every Time Series Forecasting Model In One Library](https://towardsdatascience.com/autogluon-timeseries-every-time-series-forecasting-model-in-one-library-29a3bf6879db) (*Towards Data Science*, Jan 2024)
@@ -193,5 +250,3 @@ We are actively accepting code contributions to the AutoGluon project. If you ar
 ## :classical_building: License
 
 This library is licensed under the Apache 2.0 License.
-
-