The BaseAuto class offers a shared API for hyperparameter optimization algorithms such as Optuna, HyperOpt, and Dragonfly through Ray, giving you access to grid search, Bayesian optimization, and other state-of-the-art tools like HyperBand. Understanding the impact of hyperparameters remains a valuable skill, as it helps you design informed hyperparameter search spaces that are faster to explore automatically.

BaseAuto (LightningModule)
Class for automatic hyperparameter optimization. It builds on top of Ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.
The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.
It is important to note that the success of this hyperparameter optimization relies heavily on a strong correlation between the validation and test periods.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| cls_model | PyTorch/PyTorchLightning model | See the neuralforecast.models collection here. | required |
| h | int | Forecast horizon. | required |
| loss | PyTorch module | Instantiated train loss class from the losses collection. | required |
| valid_loss | PyTorch module | Instantiated validation loss class from the losses collection. | required |
| config | dict or callable | Dictionary with a ray.tune-defined search space, or a function that takes an optuna trial and returns a configuration dict (see the sketch after this table). | required |
| search_alg | ray.tune.search variant or optuna.sampler | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html. | BasicVariantGenerator(random_state=1) |
| num_samples | int | Number of hyperparameter optimization steps/samples. | 10 |
| cpus | int | Number of CPUs to use during optimization. Only used with ray tune. | cpu_count() |
| gpus | int | Number of GPUs to use during optimization; defaults to all available. Only used with ray tune. | device_count() |
| refit_with_val | bool | Whether the refit of the best model should preserve val_size. | False |
| verbose | bool | Track progress. | False |
| alias | str | Custom name of the model. | None |
| backend | str | Backend to use for searching the hyperparameter space; either 'ray' or 'optuna'. | 'ray' |
| callbacks | list of callable | List of functions to call during the optimization process. ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable | None |
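The config can be expressed for either backend. Below is a minimal sketch using ray.tune's sampling primitives and the optuna trial API; the hyperparameter names ("input_size", "learning_rate") are illustrative and must match the parameters of the wrapped cls_model.

```python
from ray import tune

# backend='ray': config is a dict whose values are ray.tune search spaces.
# The keys shown here are illustrative; valid keys depend on cls_model.
ray_config = {
    "input_size": tune.choice([12, 24]),
    "learning_rate": tune.loguniform(1e-4, 1e-1),
}

# backend='optuna': config is a callable that maps an optuna trial
# to a configuration dict with concrete values.
def optuna_config(trial):
    return {
        "input_size": trial.suggest_categorical("input_size", [12, 24]),
        "learning_rate": trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True),
    }
```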
BaseAuto.fit
Perform the hyperparameter optimization as specified by the BaseAuto configuration dictionary config.
The optimization is performed on the TimeSeriesDataset using temporal cross validation, with a validation set that sequentially precedes the test set.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| dataset | TimeSeriesDataset | NeuralForecast's TimeSeriesDataset; see details here. | required |
| val_size | int | Size of the temporal validation set (needs to be bigger than 0). | 0 |
| test_size | int | Size of the temporal test set. | 0 |
| random_seed | int | Random seed for the hyperparameter exploration algorithms; not yet implemented. | None |
Returns:
| Name | Type | Description |
|---|---|---|
| self | BaseAuto | Fitted instance of BaseAuto with the best hyperparameters and results. |
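As a hedged sketch of the fit/predict flow on a concrete dataset, reusing ray_config from the sketch above (AutoMLP is assumed here as a representative BaseAuto subclass; the exact import paths and the from_df return values may vary across neuralforecast versions):

```python
from neuralforecast.auto import AutoMLP
from neuralforecast.tsdataset import TimeSeriesDataset
from neuralforecast.utils import AirPassengersDF

# Build the TimeSeriesDataset; from_df also returns index/date metadata
# that we discard here.
dataset, *_ = TimeSeriesDataset.from_df(AirPassengersDF)

# AutoMLP is assumed to be a BaseAuto subclass wrapping the MLP model.
model = AutoMLP(h=12, config=ray_config, num_samples=10)

# The val_size observations that sequentially precede the test set are
# used for temporal cross validation.
model.fit(dataset=dataset, val_size=24, test_size=12)
y_hat = model.predict(dataset=dataset)  # numpy predictions
```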
BaseAuto.predict
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| dataset | TimeSeriesDataset | NeuralForecast's TimeSeriesDataset; see details here. | required |
| step_size | int | Steps between sequential predictions. | 1 |
| h | int | Prediction horizon; if None, uses the model's fitted horizon. | None |
| **data_kwarg | | Additional parameters for the dataset module. | required |
Returns:
| Name | Type | Description |
|---|---|---|
| y_hat | numpy array | Numpy predictions of the NeuralForecast model. |
Usage Example
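A minimal end-to-end sketch through the high-level NeuralForecast API, assuming the AutoMLP model and the AirPassengersDF toy dataset (both part of neuralforecast, though the default config and frequency aliases are version-dependent):

```python
from ray import tune
from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoMLP
from neuralforecast.utils import AirPassengersDF

# Search space over the wrapped MLP's hyperparameters (illustrative keys;
# constants such as max_steps are allowed alongside tune samplers).
config = {
    "input_size": tune.choice([12, 24]),
    "learning_rate": tune.loguniform(1e-4, 1e-1),
    "max_steps": 100,
}

nf = NeuralForecast(
    models=[AutoMLP(h=12, config=config, num_samples=10)],
    freq="M",  # monthly frequency of AirPassengersDF
)
nf.fit(df=AirPassengersDF, val_size=24)
forecasts = nf.predict()  # DataFrame with AutoMLP point forecasts
```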
References
- James Bergstra, Remi Bardenet, Yoshua Bengio, and Balazs Kegl (2011). “Algorithms for Hyper-Parameter Optimization”. In: Advances in Neural Information Processing Systems. url: https://proceedings.neurips.cc/paper/2011/file/86e8f7ab32cfd12577bc2619bc635690-Paper.pdf
- Kirthevasan Kandasamy, Karun Raju Vysyaraju, Willie Neiswanger, Biswajit Paria, Christopher R. Collins, Jeff Schneider, Barnabas Poczos, Eric P. Xing (2019). “Tuning Hyperparameters without Grad Students: Scalable and Robust Bayesian Optimisation with Dragonfly”. Journal of Machine Learning Research. url: https://arxiv.org/abs/1903.06694
- Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, Ameet Talwalkar (2016). “Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization”. Journal of Machine Learning Research. url: https://arxiv.org/abs/1603.06560

