NLinear
NLinear is a simple and fast, yet accurate, model for long-horizon time series forecasting.
The architecture aims to boost performance when there is a distribution shift in the dataset: 1. NLinear first subtracts the last value of the sequence from the input; 2. then the input goes through a linear layer, and the subtracted part is added back before making the final prediction.
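As a rough sketch of this idea (illustrative code, not the library's internal implementation), the whole model reduces to a single linear layer wrapped by a subtract/add of the last input value:

import torch

class NLinearSketch(torch.nn.Module):
    # Illustrative sketch of the NLinear idea; not neuralforecast's actual code.
    def __init__(self, input_size: int, h: int):
        super().__init__()
        self.linear = torch.nn.Linear(input_size, h)

    def forward(self, y):               # y: [batch, input_size]
        last = y[:, -1:]                # last value of each input window
        y_hat = self.linear(y - last)   # linear map on the de-shifted sequence
        return y_hat + last             # add the subtracted value back

Subtracting the last value re-centers every window around its most recent level, which is what makes the model robust to a level shift between training and test data.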
NLinear
NLinear (h:int, input_size:int, stat_exog_list=None, hist_exog_list=None, futr_exog_list=None, exclude_insample_y=False, loss=MAE(), valid_loss=None, max_steps:int=5000, learning_rate:float=0.0001, num_lr_decays:int=-1, early_stop_patience_steps:int=-1, val_check_steps:int=100, batch_size:int=32, valid_batch_size:Optional[int]=None, windows_batch_size=1024, inference_windows_batch_size=1024, start_padding_enabled=False, step_size:int=1, scaler_type:str='identity', random_seed:int=1, num_workers_loader:int=0, drop_last_loader:bool=False, optimizer=None, optimizer_kwargs=None, lr_scheduler=None, lr_scheduler_kwargs=None, **trainer_kwargs)
Parameters:
h: int, forecast horizon.
input_size: int, maximum sequence length for truncated train backpropagation. Default -1 uses all history.
futr_exog_list: str list, future exogenous columns.
hist_exog_list: str list, historic exogenous columns.
stat_exog_list: str list, static exogenous columns.
exclude_insample_y: bool=False, if True the model skips the autoregressive features y[t-input_size:t].
loss: PyTorch module, instantiated train loss class from losses collection.
valid_loss: PyTorch module, instantiated validation loss class from losses collection.
max_steps: int=5000, maximum number of training steps.
learning_rate: float=1e-4, learning rate between (0, 1).
num_lr_decays: int=-1, number of learning rate decays, evenly distributed across max_steps.
early_stop_patience_steps: int=-1, number of validation iterations before early stopping.
val_check_steps: int=100, number of training steps between every validation loss check.
batch_size: int=32, number of different series in each batch.
valid_batch_size: int=None, number of different series in each validation and test batch; if None, uses batch_size.
windows_batch_size: int=1024, number of windows to sample in each training batch; default uses all.
inference_windows_batch_size: int=1024, number of windows to sample in each inference batch.
start_padding_enabled: bool=False, if True the model pads the time series with zeros at the beginning, by input size.
step_size: int=1, step size between each window of temporal data.
scaler_type: str='identity', type of scaler for temporal inputs normalization, see temporal scalers.
random_seed: int=1, random seed for pytorch initializer and numpy generators.
num_workers_loader: int=0, workers to be used by TimeSeriesDataLoader.
drop_last_loader: bool=False, if True TimeSeriesDataLoader drops the last non-full batch.
alias: str, optional, custom name of the model.
optimizer: subclass of 'torch.optim.Optimizer', optional, user specified optimizer instead of the default choice (Adam).
optimizer_kwargs: dict, optional, parameters used by the user specified optimizer.
lr_scheduler: subclass of 'torch.optim.lr_scheduler.LRScheduler', optional, user specified lr_scheduler instead of the default choice (StepLR).
lr_scheduler_kwargs: dict, optional, parameters used by the user specified lr_scheduler.
**trainer_kwargs: keyword trainer arguments inherited from PyTorch Lightning's trainer.
References
- Zeng, Ailing, et al. "Are Transformers Effective for Time Series Forecasting?" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37, No. 9, 2023.
NLinear.fit
NLinear.fit (dataset, val_size=0, test_size=0, random_seed=None, distributed_config=None)
Fit.
The fit method optimizes the neural network's weights using the initialization parameters (learning_rate, windows_batch_size, ...) and the loss function defined during initialization. Within fit we use a PyTorch Lightning Trainer that inherits the initialization's self.trainer_kwargs to customize its inputs; see PL's trainer arguments.
The method is designed to be compatible with SKLearn-like classes and, in particular, with the StatsForecast library.
By default the model does not save training checkpoints, to protect disk memory; to enable them, set enable_checkpointing=True in __init__.
Parameters:
dataset: NeuralForecast's TimeSeriesDataset, see documentation.
val_size: int, validation size for temporal cross-validation.
test_size: int, test size for temporal cross-validation.
random_seed: int=None, random seed for pytorch initializer and numpy generators, overwrites model.__init__'s.
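For reference, a minimal sketch of calling fit directly on a TimeSeriesDataset (the NeuralForecast wrapper shown in the Usage Example below is the more common entry point). It assumes TimeSeriesDataset.from_df returns the dataset as its first element and uses the bundled AirPassengersDF as example data:

from neuralforecast.models import NLinear
from neuralforecast.tsdataset import TimeSeriesDataset
from neuralforecast.utils import AirPassengersDF

# Assumption: from_df returns the dataset first, followed by auxiliary values.
dataset, *_ = TimeSeriesDataset.from_df(AirPassengersDF)

model = NLinear(h=12, input_size=24, max_steps=100)
model.fit(dataset=dataset, val_size=12)  # hold out the last 12 in-sample points for validation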
NLinear.predict
NLinear.predict (dataset, test_size=None, step_size=1, random_seed=None, **data_module_kwargs)
Predict.
Neural network prediction with PL's Trainer execution of predict_step.
Parameters:
dataset: NeuralForecast's TimeSeriesDataset, see documentation.
test_size: int=None, test size for temporal cross-validation.
step_size: int=1, step size between each window.
random_seed: int=None, random seed for pytorch initializer and numpy generators, overwrites model.__init__'s.
**data_module_kwargs: PL's TimeSeriesDataModule args, see documentation.
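Continuing the fit sketch above, predict runs PL's predict_step over the same dataset and returns the model forecasts (a numpy array; the exact output layout depends on the chosen loss):

forecasts = model.predict(dataset=dataset, step_size=1)  # h-step forecasts for each series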
Usage Example
import numpy as np
import pandas as pd
import pytorch_lightning as pl
import matplotlib.pyplot as plt
from neuralforecast import NeuralForecast
from neuralforecast.models import NLinear
from neuralforecast.losses.pytorch import MAE, MQLoss, DistributionLoss
from neuralforecast.tsdataset import TimeSeriesDataset
from neuralforecast.utils import AirPassengers, AirPassengersPanel, AirPassengersStatic, augment_calendar_df
AirPassengersPanel, calendar_cols = augment_calendar_df(df=AirPassengersPanel, freq='M')
Y_train_df = AirPassengersPanel[AirPassengersPanel.ds<AirPassengersPanel['ds'].values[-12]] # 132 train
Y_test_df = AirPassengersPanel[AirPassengersPanel.ds>=AirPassengersPanel['ds'].values[-12]].reset_index(drop=True) # 12 test
model = NLinear(h=12,
                input_size=24,
                loss=MAE(),
                #loss=DistributionLoss(distribution='StudentT', level=[80, 90], return_params=True),
                scaler_type='robust',
                learning_rate=1e-3,
                max_steps=500,
                val_check_steps=50,
                early_stop_patience_steps=2)
nf = NeuralForecast(
    models=[model],
    freq='M'
)
nf.fit(df=Y_train_df, static_df=AirPassengersStatic, val_size=12)
forecasts = nf.predict(futr_df=Y_test_df)
Y_hat_df = forecasts.reset_index(drop=False).drop(columns=['unique_id','ds'])
plot_df = pd.concat([Y_test_df, Y_hat_df], axis=1)
plot_df = pd.concat([Y_train_df, plot_df])
if model.loss.is_distribution_output:
    plot_df = plot_df[plot_df.unique_id=='Airline1'].drop('unique_id', axis=1)
    plt.plot(plot_df['ds'], plot_df['y'], c='black', label='True')
    plt.plot(plot_df['ds'], plot_df['NLinear-median'], c='blue', label='median')
    plt.fill_between(x=plot_df['ds'][-12:],
                     y1=plot_df['NLinear-lo-90'][-12:].values,
                     y2=plot_df['NLinear-hi-90'][-12:].values,
                     alpha=0.4, label='level 90')
    plt.grid()
    plt.legend()
    plt.plot()
else:
    plot_df = plot_df[plot_df.unique_id=='Airline1'].drop('unique_id', axis=1)
    plt.plot(plot_df['ds'], plot_df['y'], c='black', label='True')
    plt.plot(plot_df['ds'], plot_df['NLinear'], c='blue', label='Forecast')
    plt.legend()
    plt.grid()