Long-horizon forecasting is challenging because of the volatility of the predictions and their computational complexity. To address this problem we created the NHITS model and made its code available in the NeuralForecast library. NHITS specializes its partial outputs to the different frequencies of the time series through hierarchical interpolation and multi-rate input processing.

In this notebook we show how to use NHITS on the ETTm2 benchmark dataset. This dataset contains measurements from two electricity transformers at two stations, including load and oil temperature.

We will show you how to load the data, train the model, and perform automatic hyperparameter tuning to achieve SoTA performance, outperforming the latest Transformer architectures at a fraction of their computational cost (50x faster).

You can run these experiments on a GPU with Google Colab.


1. Installing NeuralForecast

!pip install neuralforecast datasetsforecast

2. Load ETTm2 Data

The LongHorizon class will automatically download the complete ETTm2 dataset and process it.

It returns three DataFrames: Y_df contains the values of the target variables, X_df contains exogenous calendar features, and S_df contains static features for each time series (none for ETTm2). For this example we will only use Y_df.

If you want to use your own data, just replace Y_df. Be sure to use a long format with a structure similar to our dataset, as in the sketch below.
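For instance, a minimal long-format DataFrame built from scratch could look like the following sketch (the series names and values are synthetic; only the column names unique_id, ds and y are required):

import numpy as np
import pandas as pd

# Synthetic two-series example in long format:
# one row per (series, timestamp) pair, with columns unique_id, ds, y.
dates = pd.date_range('2016-07-01', periods=8, freq='15min')
my_Y_df = pd.DataFrame({
    'unique_id': ['series_1'] * 8 + ['series_2'] * 8,
    'ds': list(dates) * 2,
    'y': np.random.randn(16),
})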

import pandas as pd
from datasetsforecast.long_horizon import LongHorizon

# Change this to your own data to try the model
Y_df, _, _ = LongHorizon.load(directory='./', group='ETTm2')
Y_df['ds'] = pd.to_datetime(Y_df['ds'])

# For this exercise we use 20% of the data for validation and 20% for testing
n_time = len(Y_df.ds.unique())
val_size = int(.2 * n_time)
test_size = int(.2 * n_time)

Y_df.groupby('unique_id').head(2)
        unique_id                   ds          y
0            HUFL  2016-07-01 00:00:00  -0.041413
1            HUFL  2016-07-01 00:15:00  -0.185467
57600        HULL  2016-07-01 00:00:00   0.040104
57601        HULL  2016-07-01 00:15:00  -0.214450
115200       LUFL  2016-07-01 00:00:00   0.695804
115201       LUFL  2016-07-01 00:15:00   0.434685
172800       LULL  2016-07-01 00:00:00   0.434430
172801       LULL  2016-07-01 00:15:00   0.428168
230400       MUFL  2016-07-01 00:00:00  -0.599211
230401       MUFL  2016-07-01 00:15:00  -0.658068
288000       MULL  2016-07-01 00:00:00  -0.393536
288001       MULL  2016-07-01 00:15:00  -0.659338
345600         OT  2016-07-01 00:00:00   1.018032
345601         OT  2016-07-01 00:15:00   0.980124
import matplotlib.pyplot as plt

# We plot the HUFL (High UseFul Load) series of one transformer
# and mark the validation and test splits
u_id = 'HUFL'
x_plot = pd.to_datetime(Y_df[Y_df.unique_id==u_id].ds)
y_plot = Y_df[Y_df.unique_id==u_id].y.values

x_val = x_plot[n_time - val_size - test_size]
x_test = x_plot[n_time - test_size]

fig = plt.figure(figsize=(10, 5))
fig.tight_layout()

plt.plot(x_plot, y_plot)
plt.xlabel('Date', fontsize=17)
plt.ylabel('HUFL [15 min sampling]', fontsize=17)

plt.axvline(x_val, color='black', linestyle='-.')
plt.axvline(x_test, color='black', linestyle='-.')
plt.text(x_val, 5, '  Validation', fontsize=12)
plt.text(x_test, 5, '  Test', fontsize=12)

plt.grid()

3. Hyperparameter selection and forecasting

The AutoNHITS class will automatically perform hyperparameter tuning using the Ray Tune library, exploring a user-defined or default search space. Configurations are selected based on the error on a validation set, and the best model is then stored and used during inference.

The AutoNHITS.default_config attribute contains a suggested hyperparameter space. Here, we specify a different search space following the paper's hyperparameters. Notice that 1,000 stochastic gradient steps are enough to achieve SoTA performance. Feel free to play around with this space.

from ray import tune
from neuralforecast.auto import AutoNHITS
from neuralforecast.core import NeuralForecast
horizon = 96 # 24 hours: 96 steps of 15 min

# Use your own config or AutoNHITS.default_config
nhits_config = {
       "learning_rate": tune.choice([1e-3]),                                     # Initial Learning rate
       "max_steps": tune.choice([1000]),                                         # Number of SGD steps
       "input_size": tune.choice([5 * horizon]),                                 # input_size = multiplier * horizon
       "batch_size": tune.choice([7]),                                           # Number of series in windows
       "windows_batch_size": tune.choice([256]),                                 # Number of windows in batch
       "n_pool_kernel_size": tune.choice([[2, 2, 2], [16, 8, 1]]),               # MaxPool's Kernelsize
       "n_freq_downsample": tune.choice([[168, 24, 1], [24, 12, 1], [1, 1, 1]]), # Interpolation expressivity ratios
       "activation": tune.choice(['ReLU']),                                      # Type of non-linear activation
       "n_blocks":  tune.choice([[1, 1, 1]]),                                    # Blocks per each 3 stacks
       "mlp_units":  tune.choice([[[512, 512], [512, 512], [512, 512]]]),        # 2 512-Layers per block for each stack
       "interpolation_mode": tune.choice(['linear']),                            # Type of multi-step interpolation
       "val_check_steps": tune.choice([100]),                                    # Compute validation every 100 epochs
       "random_seed": tune.randint(1, 10),
    }

Tip

Refer to https://docs.ray.io/en/latest/tune/index.html for more information on the different space options, such as lists and continuous intervals.
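For example, a sketch of how the same dictionary could mix discrete choices with continuous ranges (the ranges below are illustrative, not the paper's values):

# Illustrative only: sample the learning rate from a continuous
# log-uniform interval and the input size from a discrete list.
alternative_config = {
    **nhits_config,
    "learning_rate": tune.loguniform(1e-4, 1e-2),
    "input_size": tune.choice([3 * horizon, 5 * horizon]),
}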

To instantiate AutoNHITS you need to define:

  • h: forecasting horizon
  • loss: training loss. Use the DistributionLoss to produce probabilistic forecasts (see the sketch after the code below).
  • config: hyperparameter search space. If None, the AutoNHITS class will use a pre-defined suggested hyperparameter space.
  • num_samples: number of configurations explored.
models = [AutoNHITS(h=horizon,
                    config=nhits_config, 
                    num_samples=5)]
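The example above uses the default point loss (MAE). As mentioned in the loss bullet above, a probabilistic variant could be configured along the following lines; this is a sketch assuming DistributionLoss from neuralforecast.losses.pytorch, with an illustrative choice of distribution and levels:

from neuralforecast.losses.pytorch import DistributionLoss

# Sketch: produce probabilistic forecasts with a Normal distribution head.
prob_models = [AutoNHITS(h=horizon,
                         loss=DistributionLoss(distribution='Normal', level=[80, 90]),
                         config=nhits_config,
                         num_samples=5)]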

Fit the models by instantiating a NeuralForecast object with the following required parameters:

  • models: the list of models defined in the previous step.
  • freq: a string indicating the frequency of the data ('15min' for ETTm2).

The cross_validation method allows you to simulate multiple historic forecasts, greatly simplifying pipelines by replacing for loops with fit and predict methods.

With time series data, cross validation is done by defining a sliding window across the historical data and predicting the period following it. This form of cross validation allows us to arrive at a better estimation of our model’s predictive abilities across a wider range of temporal instances while also keeping the data in the training set contiguous as is required by our models.

The cross_validation method will use the validation set for hyperparameter selection, and will then produce the forecasts for the test set.

nf = NeuralForecast(
    models=models,
    freq='15min')

Y_hat_df = nf.cross_validation(df=Y_df, val_size=val_size,
                               test_size=test_size, n_windows=None)
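Before evaluating, it is useful to peek at the structure of the cross-validation output. A quick sanity check (the exact set of columns may vary slightly across library versions):

# Peek at the cross-validation output: we expect the timestamps ('ds'),
# the cutoff of each window, the true values ('y') and the AutoNHITS
# forecasts; verify the column names on your own run.
print(Y_hat_df.columns.tolist())
print(Y_hat_df.shape)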

4. Evaluate Results

The AutoNHITS class stores a results attribute with the information of each configuration explored by Tune. It contains the validation loss and the hyperparameters of the best configuration.

nf.models[0].results.get_best_result().config
{'learning_rate': 0.001,
 'max_steps': 1000,
 'input_size': 480,
 'batch_size': 7,
 'windows_batch_size': 256,
 'n_pool_kernel_size': [2, 2, 2],
 'n_freq_downsample': [24, 12, 1],
 'activation': 'ReLU',
 'n_blocks': [1, 1, 1],
 'mlp_units': [[512, 512], [512, 512], [512, 512]],
 'interpolation_mode': 'linear',
 'check_val_every_n_epoch': 100,
 'random_seed': 4,
 'h': 96,
 'loss': MAE()}
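Beyond the best configuration, the full set of explored trials can also be inspected. A minimal sketch, assuming the results attribute is a Ray Tune ResultGrid whose reported metric is named 'loss':

# Sketch: one row per explored trial; hyperparameters appear under
# 'config/...' columns and the validation metric under 'loss'
# (verify the exact column names on your own run).
results_df = nf.models[0].results.get_dataframe()
print(results_df.sort_values('loss').head())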
y_true = Y_hat_df.y.values
y_hat = Y_hat_df['AutoNHITS'].values

n_series = len(Y_df.unique_id.unique())

y_true = y_true.reshape(n_series, -1, horizon)
y_hat = y_hat.reshape(n_series, -1, horizon)

print('Parsed results')
print('2. y_true.shape (n_series, n_windows, n_time_out):\t', y_true.shape)
print('2. y_hat.shape  (n_series, n_windows, n_time_out):\t', y_hat.shape)
Parsed results
2. y_true.shape (n_series, n_windows, n_time_out):   (7, 11425, 96)
2. y_hat.shape  (n_series, n_windows, n_time_out):   (7, 11425, 96)
fig, axs = plt.subplots(nrows=3, ncols=1, figsize=(10, 11))
fig.tight_layout()

series = ['HUFL','HULL','LUFL','LULL','MUFL','MULL','OT']
series_idx = 3

for idx, w_idx in enumerate([200, 300, 400]):
  axs[idx].plot(y_true[series_idx, w_idx,:],label='True')
  axs[idx].plot(y_hat[series_idx, w_idx,:],label='Forecast')
  axs[idx].grid()
  axs[idx].set_ylabel(series[series_idx]+f' window {w_idx}', 
                      fontsize=17)
  if idx==2:
    axs[idx].set_xlabel('Forecast Horizon', fontsize=17)
plt.legend()
plt.show()
plt.close()

Finally, we compute the test errors for the two metrics of interest:

$$ MAE = \frac{1}{Windows \cdot Horizon} \sum_{\tau} |y_{\tau} - \hat{y}_{\tau}| \qquad \text{and} \qquad MSE = \frac{1}{Windows \cdot Horizon} \sum_{\tau} (y_{\tau} - \hat{y}_{\tau})^{2} $$
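As a cross-check, the formulas above can be computed directly with NumPy on the reshaped arrays from the previous step, before using the library's implementations:

import numpy as np

# Direct NumPy version of the MAE and MSE formulas, averaged over
# all series, windows and horizon steps.
mae_manual = np.mean(np.abs(y_true - y_hat))
mse_manual = np.mean((y_true - y_hat) ** 2)
print('MAE (manual):', mae_manual)
print('MSE (manual):', mse_manual)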

from neuralforecast.losses.numpy import mae, mse

print('MAE: ', mae(y_hat, y_true))
print('MSE: ', mse(y_hat, y_true))
MAE:  0.26096806135482414
MSE:  0.18279484416711375

For reference, we can compare this performance against the previous state-of-the-art long-horizon Transformer-based forecasting methods reported in the NHITS paper. To recover or improve on the paper's results, try increasing num_samples (e.g., to 30) in the hyperparameter tuning step.

Mean Absolute Error (MAE):

Horizon   NHITS   Autoformer   Informer   ARIMA
96        0.255   0.339        0.453      0.301
192       0.305   0.340        0.563      0.345
336       0.346   0.372        0.887      0.386
720       0.426   0.419        1.388      0.445

Mean Squared Error (MSE):

Horizon   NHITS   Autoformer   Informer   ARIMA
96        0.176   0.255        0.365      0.225
192       0.245   0.281        0.533      0.298
336       0.295   0.339        1.363      0.370
720       0.401   0.422        3.379      0.478

References

Cristian Challu, Kin G. Olivares, Boris N. Oreshkin, Federico Garza, Max Mergenthaler-Canseco, Artur Dubrawski (2021). NHITS: Neural Hierarchical Interpolation for Time Series Forecasting. Accepted at AAAI 2023.