When outliers are present in a dataset, they can distort summary statistics such as the mean and standard deviation, pulling the fitted model toward the outlier values and away from the majority of observations. Consequently, models struggle to balance accommodating outliers against fitting the normal data well. Robust regression algorithms tackle this issue by explicitly accounting for outliers in the dataset.

In this notebook we will show how to fit robust NeuralForecast methods. We will:
- Install NeuralForecast.
- Load the noisy AirPassengers dataset.
- Fit and predict with robustified NeuralForecast models.
- Plot and evaluate the predictions.

You can run these experiments on GPU with Google Colab.


1. Installing NeuralForecast

!pip install git+https://github.com/Nixtla/neuralforecast.git
import numpy as np
import pandas as pd

import matplotlib.pyplot as plt
from random import random
from random import randint
from random import seed

from neuralforecast import NeuralForecast
from neuralforecast.utils import AirPassengersDF

from neuralforecast.models import NHITS
from neuralforecast.losses.pytorch import MQLoss, DistributionLoss, HuberMQLoss

from neuralforecast.losses.numpy import mape, mqloss

2. Loading Noisy AirPassengers

For this example we will use the classic Box-Cox AirPassengers dataset, which we will augment by introducing outliers.

In particular, we will introduce outliers into the target variable by shifting randomly selected observations away from their original values by a factor of 2 to 4 standard deviations.

# Original Box-Cox AirPassengers 
# as defined in neuralforecast.utils
Y_df = AirPassengersDF.copy() 
plt.plot(Y_df.y)
plt.ylabel('Monthly Passengers')
plt.xlabel('Timestamp [t]')
plt.grid()

# Here we add some artificial outliers to AirPassengers
seed(1)
for i in range(len(Y_df)):
    factor = randint(2, 4)
    # With 3% probability, shift this observation by 2-4 standard deviations
    if random() > 0.97:
        Y_df.loc[i, 'y'] += factor * Y_df['y'].std()

plt.plot(Y_df.y)
plt.ylabel('Monthly Passengers + Noise')
plt.xlabel('Timestamp [t]')
plt.grid()

# Split datasets into train/test 
# Last 12 months for test
Y_train_df = Y_df.groupby('unique_id').head(-12).reset_index()
Y_test_df = Y_df.groupby('unique_id').tail(12).reset_index()
Y_test_df
|    | index | unique_id | ds         | y     |
|---:|------:|----------:|------------|------:|
| 0  | 132   | 1.0       | 1960-01-31 | 417.0 |
| 1  | 133   | 1.0       | 1960-02-29 | 391.0 |
| 2  | 134   | 1.0       | 1960-03-31 | 419.0 |
| 3  | 135   | 1.0       | 1960-04-30 | 461.0 |
| 4  | 136   | 1.0       | 1960-05-31 | 472.0 |
| 5  | 137   | 1.0       | 1960-06-30 | 535.0 |
| 6  | 138   | 1.0       | 1960-07-31 | 622.0 |
| 7  | 139   | 1.0       | 1960-08-31 | 606.0 |
| 8  | 140   | 1.0       | 1960-09-30 | 508.0 |
| 9  | 141   | 1.0       | 1960-10-31 | 461.0 |
| 10 | 142   | 1.0       | 1960-11-30 | 390.0 |
| 11 | 143   | 1.0       | 1960-12-31 | 432.0 |

3. Fit and predict robustified NeuralForecast

Huber MQ Loss

The Huber loss, employed in robust regression, is a loss function that is less sensitive to outliers in data than the squared error loss. The Huber loss is quadratic for small errors and linear for large errors. Here we will use a slight modification of it for probabilistic predictions. Feel free to play with the $\delta$ parameter.
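For reference, the classic pointwise Huber loss with threshold $\delta$ is:

$$L_{\delta}(y, \hat{y}) = \begin{cases} \frac{1}{2}(y-\hat{y})^{2} & \text{if } |y-\hat{y}| \le \delta \\ \delta \left( |y-\hat{y}| - \frac{1}{2}\delta \right) & \text{otherwise} \end{cases}$$

Roughly speaking, HuberMQLoss huberizes the quantile (pinball) loss at each of the requested quantiles.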

Dropout Regularization

The dropout technique is a regularization method used in neural networks to prevent overfitting. During training, dropout randomly sets a fraction of the input units or neurons in a layer to zero at each update, effectively “dropping out” those units. This means that the network cannot rely on any individual unit because it may be dropped out at any time. By doing so, dropout forces the network to learn more robust and generalizable representations by preventing units from co-adapting too much.

The dropout method can help us robustify the network to outliers in the auto-regressive features. You can explore it through the dropout_prob_theta parameter.
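To see the mechanism in isolation, here is a minimal sketch using plain torch dropout (this illustrates the technique itself, not NHITS internals; torch ships as a NeuralForecast dependency):

import torch
import torch.nn as nn

torch.manual_seed(1)
dropout = nn.Dropout(p=0.6)  # same rate we pass to dropout_prob_theta below
x = torch.ones(8)            # stand-in for auto-regressive lag features

dropout.train()              # training mode: units are randomly dropped
print(dropout(x))            # ~60% of entries zeroed, survivors scaled by 1/(1-p)

dropout.eval()               # evaluation mode: dropout is disabled
print(dropout(x))            # identity: all ones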

Fit NeuralForecast models

Using the NeuralForecast.fit method you can train a set of models on your dataset. You can define the forecasting horizon (12 in this example) and modify the hyperparameters of the models. For example, for the NHITS models we set the training losses and increased the dropout_prob_theta regularization.

See the NHITS model documentation.

horizon = 12
quantiles = [0.1, 0.25, 0.5, 0.75, 0.9]

# Try different hyperparameters to improve accuracy.
models = [NHITS(h=horizon,                           # Forecast horizon
                input_size=2 * horizon,              # Length of input sequence
                loss=HuberMQLoss(quantiles=quantiles),    # Robust Huber Loss
                valid_loss=MQLoss(quantiles=quantiles),   # Validation signal
                max_steps=500,                       # Number of steps to train
                dropout_prob_theta=0.6,              # Dropout to robustify vs outlier lag inputs
                #early_stop_patience_steps=2,        # Early stopping regularization patience
                val_check_steps=10,                  # Frequency of validation signal (affects early stopping)
                alias='Huber',
              ),
          NHITS(h=horizon,
                input_size=2 * horizon,
                loss=DistributionLoss(distribution='Normal', 
                                      quantiles=quantiles), # Classic Normal distribution
                valid_loss=MQLoss(quantiles=quantiles),
                max_steps=500,
                #early_stop_patience_steps=2,
                dropout_prob_theta=0.6,
                val_check_steps=10,
                alias='Normal',
              )
          ]
nf = NeuralForecast(models=models, freq='M')
nf.fit(df=Y_train_df)
Y_hat_df = nf.predict()
# By default NeuralForecast produces forecast intervals.
# The lo-x and hi-x columns represent the low and high bounds
# of the prediction interval accumulating x% probability.
Y_hat_df = Y_hat_df.reset_index(drop=True)
Y_hat_df
|    | ds         | Huber-lo-80.0 | Huber-lo-50.0 | Huber-median | Huber-hi-50.0 | Huber-hi-80.0 | Normal     | Normal-lo-80.0 | Normal-lo-50.0 | Normal-median | Normal-hi-50.0 | Normal-hi-80.0 |
|---:|------------|--------------:|--------------:|-------------:|--------------:|--------------:|-----------:|---------------:|---------------:|--------------:|---------------:|---------------:|
| 0  | 1960-01-31 | 392.046448    | 397.029694    | 402.842377   | 412.451111    | 420.441254    | 370.554321 | -1246.564331   | -466.951233    | 374.260681    | 1214.356934    | 1953.910645    |
| 1  | 1960-02-29 | 389.217041    | 398.078979    | 411.310669   | 426.811432    | 462.116272    | 431.134827 | -1016.146179   | -376.702576    | 469.389313    | 1195.809082    | 1874.808838    |
| 2  | 1960-03-31 | 434.787323    | 446.318176    | 456.515533   | 468.646667    | 486.479950    | 469.221069 | -965.224670    | -268.812164    | 455.846985    | 1197.529175    | 1983.282349    |
| 3  | 1960-04-30 | 435.589081    | 443.395844    | 451.102966   | 458.542328    | 469.953857    | 544.345642 | -1038.976440   | -251.178711    | 535.880615    | 1403.179932    | 2100.239990    |
| 4  | 1960-05-31 | 442.144714    | 448.320862    | 455.896271   | 466.212524    | 477.713348    | 400.593628 | -1188.452881   | -417.007935    | 426.566284    | 1202.106201    | 2103.583008    |
| 5  | 1960-06-30 | 505.597168    | 513.204590    | 522.992188   | 535.911987    | 547.264099    | 482.142883 | -1210.700195   | -386.704407    | 484.923767    | 1400.397339    | 2142.133789    |
| 6  | 1960-07-31 | 566.634033    | 576.086548    | 588.730164   | 602.847534    | 613.312256    | 548.551086 | -1049.558838   | -299.192017    | 578.715820    | 1399.025879    | 2226.514404    |
| 7  | 1960-08-31 | 554.081116    | 568.410767    | 580.600281   | 594.198730    | 605.851440    | 542.382874 | -1056.719116   | -310.321533    | 543.106689    | 1420.388306    | 2138.160889    |
| 8  | 1960-09-30 | 503.825928    | 511.493469    | 520.782532   | 530.070435    | 551.331299    | 656.870056 | -957.937927    | -157.202362    | 644.464355    | 1539.134644    | 2261.052490    |
| 9  | 1960-10-31 | 438.602539    | 445.856720    | 454.243591   | 462.382782    | 487.070221    | 662.375427 | -926.544312    | -206.266907    | 650.127808    | 1537.292480    | 2253.246094    |
| 10 | 1960-11-30 | 395.615570    | 402.616699    | 417.529083   | 430.389435    | 452.758911    | 499.940247 | -1233.157471   | -397.680908    | 492.310120    | 1396.803711    | 2209.155273    |
| 11 | 1960-12-31 | 433.402496    | 439.153748    | 448.741119   | 456.206573    | 471.018433    | 458.918365 | -1393.779053   | -589.960815    | 468.123871    | 1448.744263    | 2284.202637    |

4. Plot and Evaluate Predictions

Finally, we plot the forecasts of both models against the real values, and evaluate the accuracy of the NHITS-Huber and NHITS-Normal forecasters.

fig, ax = plt.subplots(1, 1, figsize = (20, 7))
plot_df = pd.concat([Y_train_df, Y_hat_df]).set_index('ds') # Concatenate the train and forecast dataframes
plot_df[['y', 'Huber-median', 'Normal-median']].plot(ax=ax, linewidth=2)

ax.set_title('Noisy AirPassengers Forecast', fontsize=22)
ax.set_ylabel('Monthly Passengers', fontsize=20)
ax.set_xlabel('Timestamp [t]', fontsize=20)
ax.legend(prop={'size': 15})
ax.grid()

To evaluate the median predictions we use the mean absolute percentage error (MAPE), defined as follows:

$$\mathrm{MAPE}(\mathbf{y}_{\tau}, \hat{\mathbf{y}}_{\tau}) = \mathrm{mean}\left(\frac{|\mathbf{y}_{\tau}-\hat{\mathbf{y}}_{\tau}|}{|\mathbf{y}_{\tau}|}\right)$$

#from neuralforecast.losses.numpy import mape, mqloss

huber_mape = mape(y=Y_test_df['y'], y_hat=Y_hat_df['Huber-median'])
normal_mape = mape(y=Y_test_df['y'], y_hat=Y_hat_df['Normal-median'])

print(f'Huber MAPE: {huber_mape:.1%}')
print(f'Normal MAPE: {normal_mape:.1%}')
Huber MAPE: 4.2%
Normal MAPE: 16.2%

To evaluate the probabilistic predictions we use the Continuous Ranked Probability Score (CRPS), defined as follows:

$$\mathrm{CRPS}(\hat{F}_{\tau}, \mathbf{y}_{\tau}) = \int^{1}_{0} \mathrm{QL}(\hat{F}_{\tau}, y_{\tau})_{q} \, dq$$
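In practice, mqloss approximates this integral by averaging the quantile (pinball) loss over the finite grid of quantiles we specified. A minimal numpy sketch of that discretization (scaling conventions may differ slightly from neuralforecast.losses.numpy.mqloss):

import numpy as np

def pinball_loss(y, y_hat, q):
    # Quantile (pinball) loss for a single quantile q
    delta = y - y_hat
    return np.maximum(q * delta, (q - 1) * delta)

def crps_approximation(y, y_hat_quantiles, quantiles):
    # Discretize the CRPS integral: average the pinball loss
    # over the grid of predicted quantiles
    losses = [pinball_loss(y, y_hat_quantiles[:, j], q).mean()
              for j, q in enumerate(quantiles)]
    return np.mean(losses)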

As you can see, the robust regression improvements are reflected in the probabilistic forecasts too.

huber_qcols = ['Huber-lo-80.0', 'Huber-lo-50.0', 'Huber-median', 'Huber-hi-50.0', 'Huber-hi-80.0']
normal_qcols = ['Normal-lo-80.0', 'Normal-lo-50.0', 'Normal-median', 'Normal-hi-50.0', 'Normal-hi-80.0']

huber_crps = mqloss(y=Y_test_df['y'], y_hat=Y_hat_df[huber_qcols], 
                   quantiles=np.array(quantiles))
normal_crps = mqloss(y=Y_test_df['y'], y_hat=Y_hat_df[normal_qcols], 
                    quantiles=np.array(quantiles))

print(f'Huber CRPS: {huber_crps:.4}')
print(f'Normal CRPS: {normal_crps:.4}')
Huber CRPS: 6.139
Normal CRPS: 157.5
