RNN
Elman proposed this classic recurrent neural network (RNN) in 1990, where each layer uses the following recurrent transformation:

$$\mathbf{h}^{(l)}_{t} = \mathrm{activation}\left(\mathbf{W}_{ih}\,\mathbf{x}^{(l)}_{t} + \mathbf{b}_{ih} + \mathbf{W}_{hh}\,\mathbf{h}^{(l)}_{t-1} + \mathbf{b}_{hh}\right)$$

where $\mathbf{h}^{(l)}_{t}$ is the hidden state of RNN layer $l$ for time $t$, $\mathbf{x}^{(l)}_{t}$ is the input at time $t$, and $\mathbf{h}^{(l-1)}_{t}$ is the hidden state of the previous layer at $t$. $\mathbf{x}^{(s)}$ are static exogenous inputs, $\mathbf{x}^{(h)}_{t}$ are historic exogenous inputs, and $\mathbf{x}^{(f)}_{[:t+H]}$ are future exogenous inputs available at the time of the prediction. The available activations are tanh and relu. The predictions are obtained by transforming the hidden states into contexts $\mathbf{c}_{[t+1:t+H]}$, which are decoded and adapted into $\mathbf{\hat{y}}_{[t+1:t+H],[q]}$ through MLPs.
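As a point of reference, the transformation above is the same Elman cell used by PyTorch's torch.nn.RNN. A minimal sketch of a single step (the names x_t, h_prev, and the random weights are purely illustrative, not part of the neuralforecast API):

```python
import torch

# Illustrative single Elman RNN step, following the equation above.
# Shapes: x_t is (batch, input_size), h_prev is (batch, hidden_size).
input_size, hidden_size, batch = 3, 8, 4
W_ih = torch.randn(hidden_size, input_size)   # input-to-hidden weights
W_hh = torch.randn(hidden_size, hidden_size)  # hidden-to-hidden weights
b_ih = torch.zeros(hidden_size)
b_hh = torch.zeros(hidden_size)

x_t = torch.randn(batch, input_size)
h_prev = torch.zeros(batch, hidden_size)

# h_t = tanh(W_ih x_t + b_ih + W_hh h_{t-1} + b_hh)
h_t = torch.tanh(x_t @ W_ih.T + b_ih + h_prev @ W_hh.T + b_hh)
print(h_t.shape)  # torch.Size([4, 8])
```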
References
- Jeffrey L. Elman (1990). "Finding Structure in Time". Cognitive Science, 14(2), 179-211.
- Cho, K., van Merrienboer, B., Gülcehre, C., Bougares, F., Schwenk, H., & Bengio, Y. (2014). "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation".
RNN
RNN (h:int, input_size:int=-1, inference_input_size:int=-1, encoder_n_layers:int=2, encoder_hidden_size:int=200, encoder_activation:str='tanh', encoder_bias:bool=True, encoder_dropout:float=0.0, context_size:int=10, decoder_hidden_size:int=200, decoder_layers:int=2, futr_exog_list=None, hist_exog_list=None, stat_exog_list=None, loss=MAE(), valid_loss=None, max_steps:int=1000, learning_rate:float=0.001, num_lr_decays:int=-1, early_stop_patience_steps:int=-1, val_check_steps:int=100, batch_size=32, valid_batch_size:Optional[int]=None, scaler_type:str='robust', random_seed=1, num_workers_loader=0, drop_last_loader=False, optimizer=None, optimizer_kwargs=None, **trainer_kwargs)
Multi Layer Elman RNN (RNN), with MLP decoder. The network has tanh or relu non-linearities and is trained using ADAM stochastic gradient descent. The network accepts static, historic and future exogenous data.
Parameters:
h : int, forecast horizon.
input_size : int, maximum sequence length for truncated train backpropagation. Default -1 uses all history.
inference_input_size : int, maximum sequence length for truncated inference. Default -1 uses all history.
encoder_n_layers : int=2, number of layers for the RNN.
encoder_hidden_size : int=200, units for the RNN's hidden state size.
encoder_activation : str='tanh', type of RNN activation, from tanh or relu.
encoder_bias : bool=True, whether or not to use biases b_ih, b_hh within RNN units.
encoder_dropout : float=0., dropout regularization applied to RNN outputs.
context_size : int=10, size of context vector for each timestamp on the forecasting window.
decoder_hidden_size : int=200, size of hidden layer for the MLP decoder.
decoder_layers : int=2, number of layers for the MLP decoder.
futr_exog_list : str list, future exogenous columns.
hist_exog_list : str list, historic exogenous columns.
stat_exog_list : str list, static exogenous columns.
loss : PyTorch module, instantiated train loss class from losses collection.
valid_loss : PyTorch module=loss, instantiated valid loss class from losses collection.
max_steps : int=1000, maximum number of training steps.
learning_rate : float=1e-3, learning rate between (0, 1).
num_lr_decays : int=-1, number of learning rate decays, evenly distributed across max_steps.
early_stop_patience_steps : int=-1, number of validation iterations before early stopping.
val_check_steps : int=100, number of training steps between every validation loss check.
batch_size : int=32, number of different series in each batch.
valid_batch_size : int=None, number of different series in each validation and test batch.
scaler_type : str='robust', type of scaler for temporal inputs normalization, see temporal scalers.
random_seed : int=1, random seed for pytorch initializer and numpy generators.
num_workers_loader : int=0, workers to be used by TimeSeriesDataLoader.
drop_last_loader : bool=False, if True TimeSeriesDataLoader drops last non-full batch.
optimizer : subclass of torch.optim.Optimizer, optional, user specified optimizer instead of the default choice (Adam); see the sketch after this list.
optimizer_kwargs : dict, optional, keyword arguments used by the user specified optimizer.
alias : str, optional, custom name of the model.
**trainer_kwargs : keyword trainer arguments inherited from PyTorch Lightning's trainer.
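For instance, optimizer takes an optimizer class (not an instance), with its settings passed through optimizer_kwargs. A minimal sketch, where the choice of AdamW and its weight decay are purely illustrative:

```python
import torch
from neuralforecast.models import RNN
from neuralforecast.losses.pytorch import MAE

# Sketch: swap the default Adam for AdamW via optimizer/optimizer_kwargs.
# The optimizer is passed as a class; neuralforecast instantiates it internally.
model = RNN(
    h=12,
    input_size=-1,
    loss=MAE(),
    max_steps=100,
    optimizer=torch.optim.AdamW,              # subclass of torch.optim.Optimizer
    optimizer_kwargs={'weight_decay': 1e-4},  # kwargs forwarded to the optimizer
)
```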
RNN.fit
RNN.fit (dataset, val_size=0, test_size=0, random_seed=None, distributed_config=None)
Fit.
The fit method optimizes the neural network's weights using the initialization parameters (learning_rate, batch_size, ...) and the loss function as defined during the initialization. Within fit we use a PyTorch Lightning Trainer that inherits the initialization's self.trainer_kwargs to customize its inputs; see PL's trainer arguments. A minimal direct-fit sketch follows the parameter list below.
The method is designed to be compatible with SKLearn-like classes and in particular to be compatible with the StatsForecast library.
By default the model does not save training checkpoints to protect disk memory; to enable them, set enable_checkpointing=True in __init__.
Parameters:
dataset : NeuralForecast's TimeSeriesDataset, see documentation.
val_size : int, validation size for temporal cross-validation.
test_size : int, test size for temporal cross-validation.
random_seed : int=None, random seed for pytorch initializer and numpy generators, overwrites model.__init__'s.
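A hedged sketch of calling fit directly on the model, assuming TimeSeriesDataset.from_df accepts a long-format dataframe with unique_id, ds, y columns and returns the dataset as its first element; the higher-level NeuralForecast wrapper shown in the usage example below is the more common path:

```python
from neuralforecast.models import RNN
from neuralforecast.tsdataset import TimeSeriesDataset
from neuralforecast.utils import AirPassengersPanel

# Sketch: build a TimeSeriesDataset and fit the model directly.
# Assumes from_df returns the dataset first; remaining outputs are ignored here.
dataset, *_ = TimeSeriesDataset.from_df(AirPassengersPanel[['unique_id', 'ds', 'y']])
model = RNN(h=12, input_size=24, max_steps=100)
model.fit(dataset, val_size=12)
```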
RNN.predict
RNN.predict (dataset, step_size=1, random_seed=None, **data_module_kwargs)
Predict.
Neural network prediction with PL's Trainer execution of predict_step.
Parameters:
dataset : NeuralForecast's TimeSeriesDataset, see documentation.
step_size : int=1, step size between each window.
random_seed : int=None, random seed for pytorch initializer and numpy generators, overwrites model.__init__'s.
**data_module_kwargs : PL's TimeSeriesDataModule args, see documentation.
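Continuing the direct-fit sketch above, predictions can then be obtained from the same dataset; the exact layout of the returned array depends on the loss used, so the shape check below is only indicative:

```python
# Sketch: direct prediction with the fitted model from the previous block.
# y_hat is a numpy array of forecasts; its column layout depends on the loss.
y_hat = model.predict(dataset=dataset, step_size=1)
print(y_hat.shape)
```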
Usage Example
import pandas as pd
import matplotlib.pyplot as plt
from neuralforecast import NeuralForecast
from neuralforecast.models import RNN
from neuralforecast.losses.pytorch import MQLoss
from neuralforecast.utils import AirPassengersPanel, AirPassengersStatic

# Train/test split: hold out the last 12 months for testing.
Y_train_df = AirPassengersPanel[AirPassengersPanel.ds < AirPassengersPanel['ds'].values[-12]]  # 132 train
Y_test_df = AirPassengersPanel[AirPassengersPanel.ds >= AirPassengersPanel['ds'].values[-12]].reset_index(drop=True)  # 12 test

fcst = NeuralForecast(
    models=[RNN(h=12,
                input_size=-1,
                inference_input_size=24,
                loss=MQLoss(level=[80, 90]),
                scaler_type='robust',
                encoder_n_layers=2,
                encoder_hidden_size=128,
                context_size=10,
                decoder_hidden_size=128,
                decoder_layers=2,
                max_steps=300,
                futr_exog_list=['y_[lag12]'],
                #hist_exog_list=['y_[lag12]'],
                stat_exog_list=['airline1'],
                )
    ],
    freq='M'
)
fcst.fit(df=Y_train_df, static_df=AirPassengersStatic, val_size=12)
forecasts = fcst.predict(futr_df=Y_test_df)

# Assemble actuals and forecasts, then plot one series with its 90% band.
Y_hat_df = forecasts.reset_index(drop=False).drop(columns=['unique_id', 'ds'])
plot_df = pd.concat([Y_test_df, Y_hat_df], axis=1)
plot_df = pd.concat([Y_train_df, plot_df])
plot_df = plot_df[plot_df.unique_id == 'Airline1'].drop('unique_id', axis=1)

plt.plot(plot_df['ds'], plot_df['y'], c='black', label='True')
plt.plot(plot_df['ds'], plot_df['RNN-median'], c='blue', label='median')
plt.fill_between(x=plot_df['ds'][-12:],
                 y1=plot_df['RNN-lo-90'][-12:].values,
                 y2=plot_df['RNN-hi-90'][-12:].values,
                 alpha=0.4, label='level 90')
plt.legend()
plt.grid()
plt.show()