When outliers are present in a dataset, they can distort calculated summary statistics such as the mean and standard deviation, leading the model to favor the outlier values and deviate from most observations. Consequently, models struggle to balance accommodating outliers against performing well on normal data. Robust regression algorithms tackle this issue by explicitly accounting for outliers in the dataset.

In this notebook we will show how to fit robust NeuralForecast methods. We will:
- Install NeuralForecast.
- Load the noisy AirPassengers dataset.
- Fit and predict with robustified NeuralForecast models.
- Plot and evaluate the predictions.

You can run these experiments on GPU with Google Colab.

1. Installing NeuralForecast

!pip install neuralforecast
import logging
import numpy as np
import pandas as pd

import matplotlib.pyplot as plt
from random import random
from random import randint
from random import seed

from neuralforecast import NeuralForecast
from neuralforecast.utils import AirPassengersDF

from neuralforecast.models import NHITS
from neuralforecast.losses.pytorch import MQLoss, DistributionLoss, HuberMQLoss

from utilsforecast.losses import mape, mqloss
from utilsforecast.evaluation import evaluate
logging.getLogger("pytorch_lightning").setLevel(logging.ERROR)

2. Loading Noisy AirPassengers

For this example we will use the classic Box-Cox AirPassengers dataset, which we will augment by introducing outliers.

In particular, we will introduce outliers into the target variable, altering observations to deviate from their original values by a specified factor, such as 2-to-4 times the standard deviation.

# Original Box-Cox AirPassengers 
# as defined in neuralforecast.utils
Y_df = AirPassengersDF.copy() 
plt.plot(Y_df.y)
plt.ylabel('Monthly Passengers')
plt.xlabel('Timestamp [t]')
plt.grid()

# Here we add some artificial outliers to AirPassengers
seed(1)
for i in range(len(Y_df)):
    factor = randint(2, 4)
    if random() > 0.97:
        Y_df.loc[i, "y"] += factor * Y_df["y"].std()

plt.plot(Y_df.y)
plt.ylabel('Monthly Passengers + Noise')
plt.xlabel('Timestamp [t]')
plt.grid()

# Split datasets into train/test 
# Last 12 months for test
Y_train_df = Y_df.groupby('unique_id').head(-12)
Y_test_df = Y_df.groupby('unique_id').tail(12)
Y_test_df
|     | unique_id | ds         | y     |
|-----|-----------|------------|-------|
| 132 | 1.0       | 1960-01-31 | 417.0 |
| 133 | 1.0       | 1960-02-29 | 391.0 |
| 134 | 1.0       | 1960-03-31 | 419.0 |
| 135 | 1.0       | 1960-04-30 | 461.0 |
| 136 | 1.0       | 1960-05-31 | 472.0 |
| 137 | 1.0       | 1960-06-30 | 535.0 |
| 138 | 1.0       | 1960-07-31 | 622.0 |
| 139 | 1.0       | 1960-08-31 | 606.0 |
| 140 | 1.0       | 1960-09-30 | 508.0 |
| 141 | 1.0       | 1960-10-31 | 461.0 |
| 142 | 1.0       | 1960-11-30 | 390.0 |
| 143 | 1.0       | 1960-12-31 | 432.0 |

3. Fit and predict robustified NeuralForecast

Huber MQ Loss

The Huber loss, employed in robust regression, is less sensitive to outliers in the data than the squared error loss. The Huber loss function is quadratic for small errors and linear for large errors. Here we will use a slight modification for probabilistic predictions. Feel free to play with the δ parameter.
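For intuition, below is a minimal NumPy sketch of the classic point-forecast Huber loss; the quantile version used in this notebook applies the same idea per quantile. The huber helper and the delta=1.0 default are illustrative assumptions, not NeuralForecast code.

import numpy as np

def huber(error, delta=1.0):
    # Quadratic for |error| <= delta, linear beyond: large errors
    # contribute linearly instead of quadratically, limiting outlier influence.
    abs_error = np.abs(error)
    quadratic = 0.5 * error ** 2
    linear = delta * (abs_error - 0.5 * delta)
    return np.where(abs_error <= delta, quadratic, linear)

# A 10x larger error costs roughly 10x (not 100x) more
print(huber(np.array([1.0, 10.0])))  # [0.5, 9.5]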

Dropout Regularization

The dropout technique is a regularization method used in neural networks to prevent overfitting. During training, dropout randomly sets a fraction of the input units or neurons in a layer to zero at each update, effectively “dropping out” those units. This means that the network cannot rely on any individual unit because it may be dropped out at any time. By doing so, dropout forces the network to learn more robust and generalizable representations by preventing units from co-adapting too much.

The dropout method can help us robustify the network against outliers in the autoregressive features. You can explore it through the dropout_prob_theta parameter.
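As a quick standalone illustration of the mechanism (a PyTorch sketch of generic dropout, not NeuralForecast internals):

import torch

torch.manual_seed(0)
dropout = torch.nn.Dropout(p=0.6)  # drop each unit with probability 0.6
x = torch.ones(1, 10)              # toy input of ten active units
print(dropout(x))                  # ~60% of entries zeroed; survivors scaled by 1/(1-p)=2.5
print(dropout.eval()(x))           # at inference dropout is disabled (identity)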

Fit NeuralForecast models

Using the NeuralForecast.fit method you can train a set of models on your dataset. You can define the forecasting horizon (12 in this example) and modify the hyperparameters of the models. For example, for the NHITS models below we set a robust training loss, a higher dropout_prob_theta, and a validation loss.

See the NHITS model documentation.

horizon = 12
level = [50, 80]

# Try different hyperparameters to improve accuracy.
models = [NHITS(h=horizon,                           # Forecast horizon
                input_size=2 * horizon,              # Length of input sequence
                loss=HuberMQLoss(level=level),       # Robust Huber loss
                valid_loss=MQLoss(level=level),      # Validation signal
                max_steps=500,                       # Number of steps to train
                dropout_prob_theta=0.6,              # Dropout to robustify vs outlier lag inputs
                #early_stop_patience_steps=2,        # Early stopping regularization patience
                val_check_steps=10,                  # Frequency of validation signal (affects early stopping)
                alias='Huber',
              ),
          NHITS(h=horizon,
                input_size=2 * horizon,
                loss=DistributionLoss(distribution='Normal', 
                                      level=level), # Classic Normal distribution
                valid_loss=MQLoss(level=level),
                max_steps=500,
                #early_stop_patience_steps=2,
                dropout_prob_theta=0.6,
                val_check_steps=10,
                alias='Normal',
              )
          ]
nf = NeuralForecast(models=models, freq='M')
nf.fit(df=Y_train_df)
Y_hat_df = nf.predict()
# By default NeuralForecast produces forecast intervals.
# The lo-x and hi-x columns represent the low and high
# bounds of the prediction interval accumulating x% probability.
Y_hat_df
|    | unique_id | ds         | Huber-median | Huber-lo-80 | Huber-lo-50 | Huber-hi-50 | Huber-hi-80 | Normal     | Normal-median | Normal-lo-80 | Normal-lo-50 | Normal-hi-50 | Normal-hi-80 |
|----|-----------|------------|--------------|-------------|-------------|-------------|-------------|------------|---------------|--------------|--------------|--------------|--------------|
| 0  | 1.0       | 1960-01-31 | 412.738525   | 401.058044  | 406.131958  | 420.779266  | 432.124268  | 406.459717 | 416.787842    | -124.278656  | 135.413223   | 680.997070   | 904.871765   |
| 1  | 1.0       | 1960-02-29 | 403.913544   | 384.403534  | 391.904419  | 420.288208  | 469.040375  | 399.827148 | 418.305725    | -137.291870  | 103.988327   | 661.940430   | 946.699219   |
| 2  | 1.0       | 1960-03-31 | 472.311523   | 446.644531  | 460.767334  | 486.710999  | 512.552979  | 380.263947 | 378.253998    | -105.411003  | 117.415565   | 647.887695   | 883.611633   |
| 3  | 1.0       | 1960-04-30 | 460.996674   | 444.471039  | 452.971802  | 467.544189  | 480.843903  | 432.131378 | 442.395844    | -104.205200  | 135.457123   | 729.306885   | 974.661743   |
| 4  | 1.0       | 1960-05-31 | 465.534790   | 452.048889  | 457.472626  | 476.141022  | 490.311005  | 417.186279 | 417.956543    | -117.399597  | 150.915833   | 692.936523   | 930.934814   |
| 5  | 1.0       | 1960-06-30 | 538.116028   | 518.049866  | 527.238159  | 551.501709  | 563.818848  | 444.510834 | 440.168396    | -54.501572   | 189.301392   | 703.502014   | 946.068909   |
| 6  | 1.0       | 1960-07-31 | 613.937866   | 581.048035  | 597.368408  | 629.111450  | 645.550659  | 423.707275 | 431.251526    | -97.069489   | 164.821259   | 687.764526   | 942.432251   |
| 7  | 1.0       | 1960-08-31 | 616.188660   | 581.982300  | 599.544128  | 632.137512  | 643.219543  | 386.655823 | 383.755157    | -134.702011  | 139.954285   | 658.973022   | 897.393494   |
| 8  | 1.0       | 1960-09-30 | 537.559143   | 513.477478  | 526.664856  | 551.563293  | 573.146667  | 388.874817 | 379.827057    | -139.859344  | 110.772484   | 673.086182   | 926.355774   |
| 9  | 1.0       | 1960-10-31 | 471.107605   | 449.207916  | 459.288025  | 486.402985  | 515.082458  | 401.483643 | 412.114990    | -185.928085  | 95.805717    | 703.490784   | 970.837830   |
| 10 | 1.0       | 1960-11-30 | 412.758423   | 389.203308  | 398.727295  | 431.723602  | 451.208588  | 425.829895 | 425.018799    | -172.022018  | 108.840889   | 723.424011   | 1035.656128  |
| 11 | 1.0       | 1960-12-31 | 457.254761   | 438.565582  | 446.097168  | 468.809296  | 483.967865  | 406.916595 | 399.852051    | -199.963684  | 110.715050   | 729.735107   | 951.728577   |

4. Plot and Evaluate Predictions

Finally, we plot the forecasts of both models against the real values, and evaluate the accuracy of the NHITS-Huber and NHITS-Normal forecasters.

fig, ax = plt.subplots(1, 1, figsize = (20, 7))
plot_df = pd.concat([Y_train_df, Y_hat_df]).set_index('ds') # Concatenate the train and forecast dataframes
plot_df[['y', 'Huber-median', 'Normal-median']].plot(ax=ax, linewidth=2)

ax.set_title('Noisy AirPassengers Forecast', fontsize=22)
ax.set_ylabel('Monthly Passengers', fontsize=20)
ax.set_xlabel('Timestamp [t]', fontsize=20)
ax.legend(prop={'size': 15})
ax.grid()

To evaluate the median predictions we use the mean absolute percentage error (MAPE), defined as follows:

$$\mathrm{MAPE}(\mathbf{y}_{\tau}, \hat{\mathbf{y}}_{\tau}) = \mathrm{mean}\left(\frac{|\mathbf{y}_{\tau}-\hat{\mathbf{y}}_{\tau}|}{|\mathbf{y}_{\tau}|}\right)$$
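For concreteness, a toy two-observation example of this computation (the values are made up for illustration):

import numpy as np

y = np.array([100.0, 200.0])
y_hat = np.array([110.0, 180.0])
# MAPE = mean(|y - y_hat| / |y|) = mean([0.10, 0.10]) = 0.10
print(np.mean(np.abs(y - y_hat) / np.abs(y)))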

To evaluate the coherent probabilistic predictions we use the Continuous Ranked Probability Score (CRPS), defined as follows:

$$\mathrm{CRPS}(\hat{F}_{\tau}, \mathbf{y}_{\tau}) = \int_{0}^{1} \mathrm{QL}(\hat{F}_{\tau}, y_{\tau})_{q} \, dq$$
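In practice the integral is approximated by averaging the quantile (pinball) loss over a discrete grid of quantiles, which is what the mqloss metric below reports. Here is a minimal sketch with toy numbers; the quantile_loss helper and the forecast values are illustrative assumptions.

import numpy as np

def quantile_loss(y, y_hat_q, q):
    # Pinball loss for a single quantile forecast
    diff = y - y_hat_q
    return np.maximum(q * diff, (q - 1) * diff)

# Approximate CRPS: average pinball loss over a grid of quantiles
y = 100.0
quantiles = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
y_hat_quantiles = np.array([80.0, 90.0, 100.0, 110.0, 120.0])  # toy quantile forecast
crps_approx = np.mean([quantile_loss(y, yq, q)
                       for yq, q in zip(y_hat_quantiles, quantiles)])
print(crps_approx)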

As you can see, the robust regression improvements are reflected in both the point and the probabilistic forecast settings.

df_metrics = Y_hat_df.merge(Y_test_df, on=['ds', 'unique_id'])
df_metrics.rename(columns={'Huber-median': 'Huber'}, inplace=True)

metrics = evaluate(df_metrics,
                   metrics=[mape, mqloss],
                   models=['Huber', 'Normal'],
                   level = [50, 80],
                   agg_fn="mean")

metrics
|   | metric | Huber    | Normal    |
|---|--------|----------|-----------|
| 0 | mape   | 0.034726 | 0.140207  |
| 1 | mqloss | 5.511535 | 61.891651 |
