Deep-learning models are the state-of-the-art in time series forecasting. They have outperformed statistical and tree-based approaches in recent large-scale competitions, such as the M series, and are being increasingly adopted in industry. However, their performance is greatly affected by the choice of hyperparameters. Selecting the optimal configuration, a process called hyperparameter tuning, is essential to achieve the best performance.

The main steps of hyperparameter tuning, sketched in code after the list, are:

  1. Define training and validation sets.
  2. Define search space.
  3. Sample configurations with a search algorithm, train models, and evaluate them on the validation set.
  4. Select and store the best model.
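Schematically, the loop looks like the following sketch, where train_and_evaluate is a toy stand-in for fitting a model and returning its validation error:

import random

# Schematic sketch of the tuning loop with random search.
# `train_and_evaluate` is a toy stand-in for fitting a model
# and returning its validation error.
search_space = {"learning_rate": [1e-4, 1e-3, 1e-2], "input_size": [12, 24]}

def train_and_evaluate(config):
    return abs(config["learning_rate"] - 1e-3) + 1 / config["input_size"]

best_config, best_error = None, float("inf")
for _ in range(10):  # number of sampled configurations
    config = {k: random.choice(v) for k, v in search_space.items()}  # step 3: sample and train
    error = train_and_evaluate(config)
    if error < best_error:                                           # step 4: keep the best
        best_config, best_error = config, error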

With Neuralforecast, we automate and simplify the hyperparameter tuning process with the Auto models. Every model in the library has an Auto version (for example, AutoNHITS, AutoTFT) that performs automatic hyperparameter selection on a default or user-defined search space.

The Auto models can be used with two backends: Ray’s Tune library and Optuna. Both are exposed through a user-friendly, simplified API that retains most of their capabilities.

In this tutorial, we show in detail how to instantiate and train an AutoNHITS model with a custom search space using both the Tune and Optuna backends, install and use the HYPEROPT search algorithm, and use the model with the optimal hyperparameters to forecast.

You can run these experiments on GPU with Google Colab.


1. Install Neuralforecast

# !pip install neuralforecast hyperopt

2. Load Data

In this example we will use AirPassengers, a popular dataset with monthly airline passenger counts from 1949 to 1960. We load the data, available through our utils methods, already in the required format. See https://nixtla.github.io/neuralforecast/examples/data_format.html for more details on the data input format.

from neuralforecast.utils import AirPassengersDF

Y_df = AirPassengersDF
Y_df.head()
|   | unique_id | ds         | y     |
|---|-----------|------------|-------|
| 0 | 1.0       | 1949-01-31 | 112.0 |
| 1 | 1.0       | 1949-02-28 | 118.0 |
| 2 | 1.0       | 1949-03-31 | 132.0 |
| 3 | 1.0       | 1949-04-30 | 129.0 |
| 4 | 1.0       | 1949-05-31 | 121.0 |

3. Ray’s Tune backend

First, we show how to use the Tune backend. It is built on Ray’s Tune, a scalable framework for hyperparameter tuning that is widely used in the machine learning community by companies and research labs alike. If you plan to use the Optuna backend, you can skip this section.

3.a Define hyperparameter grid

Each Auto model contains a default search space that was extensively tested on multiple large-scale datasets. Search spaces are specified with dictionaries, where keys correspond to the model’s hyperparameters and each value is a Tune function that specifies how the hyperparameter will be sampled. For example, use randint to sample integers uniformly, and choice to sample values from a list.
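For illustration, a minimal search space using both functions (the hyperparameter names here are hypothetical and not tied to any model):

from ray import tune

# Minimal illustration of Tune sampling functions; "hidden_size" and
# "num_layers" are hypothetical hyperparameter names.
example_space = {
    "hidden_size": tune.choice([64, 128, 256]),  # sample one value from the list
    "num_layers": tune.randint(1, 5),            # sample an integer uniformly in [1, 5)
}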

3.a.1 Default hyperparameter grid

The default search space dictionary can be accessed through the get_default_config function of the Auto model. This is useful if you wish to use the default parameter configuration but want to change one or more hyperparameter spaces without changing the other default values.

To extract the default config, you need to define:

  • h: forecasting horizon.
  • backend: backend to use.
  • n_series: optional; the number of unique time series, required only for multivariate models.

In this example, we will use h=12 and ray as the backend. We will use the default hyperparameter space, changing only the random_seed range and the n_pool_kernel_size candidates.

from ray import tune
from neuralforecast.auto import AutoNHITS

nhits_config = AutoNHITS.get_default_config(h = 12, backend="ray")                      # Extract the default hyperparameter settings
nhits_config["random_seed"] = tune.randint(1, 10)                                       # Random seed
nhits_config["n_pool_kernel_size"] = tune.choice([[2, 2, 2], [16, 8, 1]])               # MaxPool's Kernelsize

3.a.2 Custom hyperparameter grid

More generally, users can define search spaces fully tailored to particular datasets and tasks by specifying a complete hyperparameter search space dictionary.

In the following example we optimize the learning_rate and two NHITS-specific hyperparameters: n_pool_kernel_size and n_freq_downsample. Additionally, we use the search space to modify default hyperparameters, such as max_steps and val_check_steps.

nhits_config = {
       "max_steps": 100,                                                         # Number of SGD steps
       "input_size": 24,                                                         # Size of input window
       "learning_rate": tune.loguniform(1e-5, 1e-1),                             # Initial Learning rate
       "n_pool_kernel_size": tune.choice([[2, 2, 2], [16, 8, 1]]),               # MaxPool's Kernelsize
       "n_freq_downsample": tune.choice([[168, 24, 1], [24, 12, 1], [1, 1, 1]]), # Interpolation expressivity ratios
       "val_check_steps": 50,                                                    # Compute validation every 50 steps
       "random_seed": tune.randint(1, 10),                                       # Random seed
    }

Important

Configuration dictionaries are not interchangeable between models since they have different hyperparameters. Refer to https://nixtla.github.io/neuralforecast/models.html for a complete list of each model’s hyperparameters.
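To check which hyperparameters a given model’s configuration covers before overriding it, you can inspect the keys of its default search space:

# List the hyperparameters covered by AutoNHITS's default search space
nhits_space = AutoNHITS.get_default_config(h=12, backend="ray")
print(list(nhits_space.keys()))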

3.b Instantiate Auto model

To instantiate an Auto model you need to define:

  • h: forecasting horizon.
  • loss: training and validation loss from neuralforecast.losses.pytorch.
  • config: hyperparameter search space. If None, the Auto class will use a pre-defined suggested hyperparameter space.
  • search_alg: search algorithm (from tune.search), default is random search. Refer to https://docs.ray.io/en/latest/tune/api_docs/suggestion.html for more information on the different search algorithm options.
  • backend: backend to use, default is ray. If optuna, the Auto class will use the Optuna backend.
  • num_samples: number of configurations explored.

In this example we set the horizon h to 12, use the MAE loss for training and validation, and use the HYPEROPT search algorithm.

from ray.tune.search.hyperopt import HyperOptSearch
from neuralforecast.losses.pytorch import MAE
from neuralforecast.auto import AutoNHITS
model = AutoNHITS(h=12,
                  loss=MAE(),
                  config=nhits_config,
                  search_alg=HyperOptSearch(),
                  backend='ray',
                  num_samples=10)

Tip

The number of samples, num_samples, is a crucial parameter! Larger values will usually produce better results since more configurations are explored, but they will also increase training time. Larger search spaces usually require more samples. As a general rule, we recommend setting num_samples higher than 20. We set it to 10 in this example for demonstration purposes.

3.c Train model and predict with Core class

Next, we use the NeuralForecast class to train the Auto model. In this step, Auto models automatically perform hyperparameter tuning: they train multiple models with different hyperparameters, produce forecasts on the validation set, and evaluate them. The best configuration is selected based on validation error. Only the best model is stored and used during inference.

from neuralforecast import NeuralForecast

Use the val_size parameter of the fit method to control the length of the validation set. In this case we set the validation set to twice the forecasting horizon.

nf = NeuralForecast(models=[model], freq='M')
nf.fit(df=Y_df, val_size=24)
Global seed set to 8

The results of the hyperparameter tuning are available in the results attribute of the Auto model. Use the get_dataframe method to get the results in a pandas dataframe.

results = nf.models[0].results.get_dataframe()
results.head()
|   | loss          | time_this_iter_s | done  | timesteps_total | episodes_total | training_iteration | trial_id | experiment_id                    | date                | timestamp  | config/input_size | config/learning_rate | config/loss | config/max_steps | config/n_freq_downsample | config/n_pool_kernel_size | config/random_seed | config/val_check_steps | config/valid_loss | logdir                                           |
|---|---------------|------------------|-------|-----------------|----------------|--------------------|----------|----------------------------------|---------------------|------------|-------------------|----------------------|-------------|------------------|--------------------------|---------------------------|--------------------|------------------------|-------------------|--------------------------------------------------|
| 0 | 21.173204     | 3.645993         | False | NaN             | NaN            | 2                  | e20dbd9b | f62650f116914e18889bb96963c6b202 | 2023-10-03_11-19-14 | 1696346354 | 24                | 0.000415             | MAE()       | 100              | [168, 24, 1]             | [16, 8, 1]                | 7                  | 50                     | MAE()             | /Users/cchallu/ray_results/_train_tune_2023-10…  |
| 1 | 33.843426     | 3.756614         | False | NaN             | NaN            | 2                  | 75e09199 | f62650f116914e18889bb96963c6b202 | 2023-10-03_11-19-22 | 1696346362 | 24                | 0.000068             | MAE()       | 100              | [24, 12, 1]              | [16, 8, 1]                | 4                  | 50                     | MAE()             | /Users/cchallu/ray_results/_train_tune_2023-10…  |
| 2 | 17.750280     | 8.573898         | False | NaN             | NaN            | 2                  | 0dc5925a | f62650f116914e18889bb96963c6b202 | 2023-10-03_11-19-36 | 1696346376 | 24                | 0.001615             | MAE()       | 100              | [1, 1, 1]                | [2, 2, 2]                 | 8                  | 50                     | MAE()             | /Users/cchallu/ray_results/_train_tune_2023-10…  |
| 3 | 24.573055     | 6.987517         | False | NaN             | NaN            | 2                  | 352e03ff | f62650f116914e18889bb96963c6b202 | 2023-10-03_11-19-50 | 1696346390 | 24                | 0.003405             | MAE()       | 100              | [1, 1, 1]                | [2, 2, 2]                 | 5                  | 50                     | MAE()             | /Users/cchallu/ray_results/_train_tune_2023-10…  |
| 4 | 474221.937500 | 4.912362         | False | NaN             | NaN            | 2                  | 289bdd5e | f62650f116914e18889bb96963c6b202 | 2023-10-03_11-20-00 | 1696346400 | 24                | 0.080117             | MAE()       | 100              | [168, 24, 1]             | [16, 8, 1]                | 5                  | 50                     | MAE()             | /Users/cchallu/ray_results/_train_tune_2023-10…  |
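If you only need the winning configuration, the results object exposes it directly. A minimal sketch, assuming the results attribute is a Ray Tune ResultGrid (consistent with the get_dataframe call above) and using the reported loss metric:

# Retrieve the best trial by validation loss and inspect its configuration
best_result = nf.models[0].results.get_best_result(metric="loss", mode="min")
print(best_result.config)  # hyperparameters of the best configuration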

Next, we use the predict method to forecast the next 12 months using the optimal hyperparameters.

Y_hat_df = nf.predict()
Y_hat_df = Y_hat_df.reset_index()
Y_hat_df.head()
Predicting DataLoader 0: 100%|██████████| 1/1 [00:00<00:00, 113.97it/s]
|   | unique_id | ds         | AutoNHITS  |
|---|-----------|------------|------------|
| 0 | 1.0       | 1961-01-31 | 442.346680 |
| 1 | 1.0       | 1961-02-28 | 439.409821 |
| 2 | 1.0       | 1961-03-31 | 477.709930 |
| 3 | 1.0       | 1961-04-30 | 503.884064 |
| 4 | 1.0       | 1961-05-31 | 521.344421 |

4. Optuna backend

In this section we show how to use the Optuna backend. Optuna is a lightweight and versatile platform for hyperparameter optimization. If you plan to use the Tune backend, you can skip this section.

4.a Define hyperparameter grid

Each Auto model contains a default search space that was extensively tested on multiple large-scale datasets. Search spaces are specified with a function that returns a dictionary, where keys correspond to the model’s hyperparameters and each value is a suggest function that specifies how the hyperparameter will be sampled. For example, use suggest_int to sample integers uniformly, and suggest_categorical to sample values from a list. See https://optuna.readthedocs.io/en/stable/reference/generated/optuna.trial.Trial.html for more details.
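For illustration, a minimal search space function using both suggest methods (the hyperparameter names here are hypothetical and not tied to any model):

# Minimal illustration of Optuna's suggest API; "hidden_size" and
# "num_layers" are hypothetical hyperparameter names.
def example_space(trial):
    return {
        "hidden_size": trial.suggest_categorical("hidden_size", [64, 128, 256]),  # sample from a list
        "num_layers": trial.suggest_int("num_layers", 1, 4),                      # integer in [1, 4]
    }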

4.a.1 Default hyperparameter grid

The default search space dictionary can be accessed through the get_default_config function of the Auto model. This is useful if you wish to use the default parameter configuration but want to change one or more hyperparameter spaces without changing the other default values.

To extract the default config, you need to define:

  • h: forecasting horizon.
  • backend: backend to use.
  • n_series: optional; the number of unique time series, required only for multivariate models.

In this example, we will use h=12 and optuna as the backend. We will use the default hyperparameter space, changing only the random_seed range and the n_pool_kernel_size candidates.

import optuna
optuna.logging.set_verbosity(optuna.logging.WARNING) # Use this to disable training prints from optuna
nhits_default_config = AutoNHITS.get_default_config(h = 12, backend="optuna")                   # Extract the default hyperparameter settings

def config_nhits(trial):
    config = {**nhits_default_config(trial)}
    config.update({
                   "random_seed": trial.suggest_int("random_seed", 1, 10), 
                   "n_pool_kernel_size": trial.suggest_categorical("n_pool_kernel_size", [[2, 2, 2], [16, 8, 1]])
                   })
    return config

4.a.2 Custom hyperparameter grid

More generally, users can define search spaces fully tailored to particular datasets and tasks by specifying a complete hyperparameter search space function.

In the following example we optimize the learning_rate and two NHITS-specific hyperparameters: n_pool_kernel_size and n_freq_downsample. Additionally, we use the search space to modify default hyperparameters, such as max_steps and val_check_steps.

def config_nhits(trial):
    return {
        "max_steps": 100,                                                                                               # Number of SGD steps
        "input_size": 24,                                                                                               # Size of input window
        "learning_rate": trial.suggest_loguniform("learning_rate", 1e-5, 1e-1),                                         # Initial Learning rate
        "n_pool_kernel_size": trial.suggest_categorical("n_pool_kernel_size", [[2, 2, 2], [16, 8, 1]]),                 # MaxPool's Kernelsize
        "n_freq_downsample": trial.suggest_categorical("n_freq_downsample", [[168, 24, 1], [24, 12, 1], [1, 1, 1]]),    # Interpolation expressivity ratios
        "val_check_steps": 50,                                                                                          # Compute validation every 50 steps
        "random_seed": trial.suggest_int("random_seed", 1, 10),                                                         # Random seed
    }

4.b Instantiate Auto model

To instantiate an Auto model you need to define:

  • h: forecasting horizon.
  • loss: training and validation loss from neuralforecast.losses.pytorch.
  • config: hyperparameter search space. If None, the Auto class will use a pre-defined suggested hyperparameter space.
  • search_alg: search algorithm (from optuna.samplers), default is TPESampler (Tree-structured Parzen Estimator). Refer to https://optuna.readthedocs.io/en/stable/reference/samplers/index.html for more information on the different search algorithm options.
  • backend: backend to use, default is ray. If optuna, the Auto class will use the Optuna backend.
  • num_samples: number of configurations explored.

model = AutoNHITS(h=12,
                  loss=MAE(),
                  config=config_nhits,
                  search_alg=optuna.samplers.TPESampler(),
                  backend='optuna',
                  num_samples=10)

Important

Configuration dictionaries and search algorithms for Tune and Optuna are not interchangeable! Use the appropriate type of search algorithm and custom configuration dictionary for each backend.

4.c Train model and predict with Core class

Use the val_size parameter of the fit method to control the length of the validation set. In this case we set the validation set to twice the forecasting horizon.

nf = NeuralForecast(models=[model], freq='M')
nf.fit(df=Y_df, val_size=24)
Global seed set to 6
Global seed set to 6
Global seed set to 1
Global seed set to 1
Global seed set to 7
Global seed set to 4
Global seed set to 9
Global seed set to 8
Global seed set to 7
Global seed set to 7
Global seed set to 6

The results of the hyperparameter tuning are available in the results attribute of the Auto model. Use the trials_dataframe method to get the results in a pandas dataframe.

results = nf.models[0].results.trials_dataframe()
results.drop(columns='user_attrs_ALL_PARAMS')
|   | number | value        | datetime_start             | datetime_complete          | duration               | params_learning_rate | params_n_freq_downsample | params_n_pool_kernel_size | params_random_seed | state    |
|---|--------|--------------|----------------------------|----------------------------|------------------------|----------------------|--------------------------|---------------------------|--------------------|----------|
| 0 | 0      | 2.964735e+01 | 2023-10-23 19:13:30.251719 | 2023-10-23 19:13:33.007086 | 0 days 00:00:02.755367 | 0.000074             | [24, 12, 1]              | [2, 2, 2]                 | 2                  | COMPLETE |
| 1 | 1      | 2.790444e+03 | 2023-10-23 19:13:33.007483 | 2023-10-23 19:13:35.823089 | 0 days 00:00:02.815606 | 0.026500             | [24, 12, 1]              | [2, 2, 2]                 | 10                 | COMPLETE |
| 2 | 2      | 2.193000e+01 | 2023-10-23 19:13:35.823607 | 2023-10-23 19:13:38.599414 | 0 days 00:00:02.775807 | 0.000337             | [168, 24, 1]             | [2, 2, 2]                 | 7                  | COMPLETE |
| 3 | 3      | 1.147799e+08 | 2023-10-23 19:13:38.600149 | 2023-10-23 19:13:41.440307 | 0 days 00:00:02.840158 | 0.059274             | [1, 1, 1]                | [16, 8, 1]                | 5                  | COMPLETE |
| 4 | 4      | 2.140740e+01 | 2023-10-23 19:13:41.440833 | 2023-10-23 19:13:44.184860 | 0 days 00:00:02.744027 | 0.000840             | [168, 24, 1]             | [16, 8, 1]                | 5                  | COMPLETE |
| 5 | 5      | 1.606544e+01 | 2023-10-23 19:13:44.185291 | 2023-10-23 19:13:46.945672 | 0 days 00:00:02.760381 | 0.005477             | [1, 1, 1]                | [16, 8, 1]                | 8                  | COMPLETE |
| 6 | 6      | 1.301640e+04 | 2023-10-23 19:13:46.946108 | 2023-10-23 19:13:49.805633 | 0 days 00:00:02.859525 | 0.056746             | [1, 1, 1]                | [16, 8, 1]                | 3                  | COMPLETE |
| 7 | 7      | 4.972713e+01 | 2023-10-23 19:13:49.806278 | 2023-10-23 19:13:52.577180 | 0 days 00:00:02.770902 | 0.000021             | [24, 12, 1]              | [2, 2, 2]                 | 9                  | COMPLETE |
| 8 | 8      | 2.138879e+01 | 2023-10-23 19:13:52.577678 | 2023-10-23 19:13:55.372792 | 0 days 00:00:02.795114 | 0.007136             | [1, 1, 1]                | [2, 2, 2]                 | 9                  | COMPLETE |
| 9 | 9      | 2.094145e+01 | 2023-10-23 19:13:55.373149 | 2023-10-23 19:13:58.125058 | 0 days 00:00:02.751909 | 0.004655             | [1, 1, 1]                | [2, 2, 2]                 | 6                  | COMPLETE |
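Since the results attribute holds an Optuna Study (consistent with the trials_dataframe call above), the best trial can also be retrieved directly:

# Retrieve the trial with the lowest validation loss and inspect its parameters
best_trial = nf.models[0].results.best_trial
print(best_trial.params)  # winning hyperparameters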

Next, we use the predict method to forecast the next 12 months using the optimal hyperparameters.

Y_hat_df_optuna = nf.predict()
Y_hat_df_optuna = Y_hat_df_optuna.reset_index()
Y_hat_df_optuna.head()
Predicting DataLoader 0: 100%|██████████| 1/1 [00:00<00:00, 112.75it/s]
|   | unique_id | ds         | AutoNHITS  |
|---|-----------|------------|------------|
| 0 | 1.0       | 1961-01-31 | 445.272858 |
| 1 | 1.0       | 1961-02-28 | 469.633423 |
| 2 | 1.0       | 1961-03-31 | 475.265289 |
| 3 | 1.0       | 1961-04-30 | 483.228516 |
| 4 | 1.0       | 1961-05-31 | 516.583496 |

5. Plots

Finally, we compare the forecasts produced by the AutoNHITS model with both backends.

import pandas as pd
import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, 1, figsize=(20, 7))
plot_df = pd.concat([Y_df, Y_hat_df]).reset_index()

ax.plot(plot_df['ds'], plot_df['y'], label='y')
ax.plot(plot_df['ds'], plot_df['AutoNHITS'], label='Ray')
ax.plot(Y_hat_df_optuna['ds'], Y_hat_df_optuna['AutoNHITS'], label='Optuna')

ax.set_title('AirPassengers Forecast', fontsize=22)
ax.set_ylabel('Monthly Passengers', fontsize=20)
ax.set_xlabel('Timestamp [t]', fontsize=20)
ax.legend(prop={'size': 15})
ax.grid()
