The multi-layer Elman recurrent neural network (RNN) was proposed in 1990, where each layer uses the following recurrent transformation:

$$\mathbf{h}^{(l)}_{t} = \mathrm{tanh}\left(\mathbf{W}_{ih}\,\mathbf{x}^{(l)}_{t} + \mathbf{b}_{ih} + \mathbf{W}_{hh}\,\mathbf{h}^{(l)}_{t-1} + \mathbf{b}_{hh}\right)$$

where $\mathbf{h}^{(l)}_{t}$ is the hidden state of RNN layer $l$ for time $t$, $\mathbf{x}^{(l)}_{t}$ is the input at time $t$, and $\mathbf{h}^{(l)}_{t-1}$ is the layer's hidden state at $t-1$. $\mathbf{x}^{(s)}$ are static exogenous inputs, $\mathbf{x}^{(h)}_{t}$ are historic exogenous inputs, and $\mathbf{x}^{(f)}_{[:t+H]}$ are future exogenous inputs available at the time of the prediction. The available activations are tanh and relu. The predictions are obtained by transforming the hidden states into contexts $\mathbf{c}_{[t+1:t+H]}$, which are decoded and adapted into $\hat{\mathbf{y}}_{[t+1:t+H],[q]}$ through MLPs.
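For intuition, the following is a minimal, self-contained sketch of the recurrence above in plain PyTorch; the tensor sizes are arbitrary illustrations and this is not the library's internal implementation.

```python
import torch

# Hypothetical sizes for illustration: 10 input features, hidden size 200.
input_size, hidden_size = 10, 200
W_ih = torch.randn(hidden_size, input_size)   # input-to-hidden weights
W_hh = torch.randn(hidden_size, hidden_size)  # hidden-to-hidden weights
b_ih = torch.zeros(hidden_size)               # input-to-hidden bias
b_hh = torch.zeros(hidden_size)               # hidden-to-hidden bias

x = torch.randn(48, input_size)   # a sequence of 48 time steps
h = torch.zeros(hidden_size)      # initial hidden state h_0
hidden_states = []
for x_t in x:                     # one Elman update per time step
    h = torch.tanh(W_ih @ x_t + b_ih + W_hh @ h + b_hh)
    hidden_states.append(h)
```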
References
- Jeffrey L. Elman (1990). "Finding Structure in Time".
- Cho, K., van Merrienboer, B., Gülcehre, C., Bougares, F., Schwenk, H., & Bengio, Y. (2014). "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation".
RNN
Multi Layer Elman RNN (RNN), with MLP decoder. The network has tanh or relu non-linearities and is trained using ADAM stochastic gradient descent. The network accepts static, historic and future exogenous data. A usage sketch follows the parameter list below.
Parameters:
- h: int, forecast horizon.
- input_size: int, maximum sequence length for truncated train backpropagation. Default -1 uses 3 * horizon.
- inference_input_size: int, maximum sequence length for truncated inference. Default None uses input_size history.
- h_train: int, maximum sequence length for truncated train backpropagation. Default 1.
- encoder_n_layers: int=2, number of layers for the RNN.
- encoder_hidden_size: int=200, units for the RNN's hidden state size.
- encoder_activation: str=tanh, type of RNN activation from tanh or relu.
- encoder_bias: bool=True, whether or not to use biases b_ih, b_hh within RNN units.
- encoder_dropout: float=0., dropout regularization applied to RNN outputs.
- context_size: deprecated.
- decoder_hidden_size: int=200, size of hidden layer for the MLP decoder.
- decoder_layers: int=2, number of layers for the MLP decoder.
- futr_exog_list: str list, future exogenous columns.
- hist_exog_list: str list, historic exogenous columns.
- stat_exog_list: str list, static exogenous columns.
- exclude_insample_y: bool=False, whether to exclude the target variable from the historic exogenous data.
- recurrent: bool=False, whether to produce forecasts recursively (True) or directly (False).
- loss: PyTorch module, instantiated train loss class from the losses collection.
- valid_loss: PyTorch module=loss, instantiated valid loss class from the losses collection.
- max_steps: int=1000, maximum number of training steps.
- learning_rate: float=1e-3, learning rate between (0, 1).
- num_lr_decays: int=-1, number of learning rate decays, evenly distributed across max_steps.
- early_stop_patience_steps: int=-1, number of validation iterations before early stopping.
- val_check_steps: int=100, number of training steps between every validation loss check.
- batch_size: int=32, number of different series in each batch.
- valid_batch_size: int=None, number of different series in each validation and test batch.
- windows_batch_size: int=128, number of windows to sample in each training batch, default uses all.
- inference_windows_batch_size: int=1024, number of windows to sample in each inference batch, -1 uses all.
- start_padding_enabled: bool=False, if True, the model will pad the time series with zeros at the beginning, by input size.
- step_size: int=1, step size between each window of temporal data.
- scaler_type: str='robust', type of scaler for temporal inputs normalization, see temporal scalers.
- random_seed: int=1, random seed for pytorch initializer and numpy generators.
- drop_last_loader: bool=False, if True, TimeSeriesDataLoader drops the last non-full batch.
- alias: str, optional, custom name of the model.
- optimizer: subclass of 'torch.optim.Optimizer', optional, user specified optimizer instead of the default choice (Adam).
- optimizer_kwargs: dict, optional, parameters used by the user specified optimizer.
- lr_scheduler: subclass of 'torch.optim.lr_scheduler.LRScheduler', optional, user specified lr_scheduler instead of the default choice (StepLR).
- lr_scheduler_kwargs: dict, optional, parameters used by the user specified lr_scheduler.
- dataloader_kwargs: dict, optional, parameters passed into the PyTorch Lightning dataloader by the TimeSeriesDataLoader.
- **trainer_kwargs: keyword trainer arguments inherited from PyTorch Lightning's trainer.
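The snippet below is a usage sketch, not part of the reference above: it instantiates the model with a few of the documented parameters and trains it through the NeuralForecast wrapper on the bundled AirPassengers example. The hyperparameter values are illustrative only.

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import RNN
from neuralforecast.utils import AirPassengersDF

# Illustrative hyperparameters; anything not set keeps the defaults listed above.
model = RNN(
    h=12,                      # forecast horizon
    input_size=24,             # truncated backpropagation length
    encoder_n_layers=2,
    encoder_hidden_size=128,
    encoder_activation='tanh',
    decoder_hidden_size=128,
    decoder_layers=2,
    max_steps=300,
    scaler_type='robust',
)

nf = NeuralForecast(models=[model], freq='M')  # monthly frequency for AirPassengers
nf.fit(df=AirPassengersDF)                     # long-format dataframe: unique_id, ds, y
forecasts = nf.predict()                       # dataframe with one forecast column per model
```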
RNN.fit
Fit. The fit method optimizes the neural network's weights using the initialization parameters (learning_rate, windows_batch_size, …) and the loss function defined during initialization. Within fit we use a PyTorch Lightning Trainer that inherits the initialization's self.trainer_kwargs to customize its inputs; see PL's trainer arguments.

The method is designed to be compatible with SKLearn-like classes and, in particular, with the StatsForecast library.

By default the model does not save training checkpoints to protect disk memory; to enable them, set enable_checkpointing=True in __init__. A direct-call sketch follows the parameter list below.

Parameters:
- dataset: NeuralForecast's TimeSeriesDataset, see documentation.
- val_size: int, validation size for temporal cross-validation.
- random_seed: int=None, random seed for pytorch initializer and numpy generators, overwrites model.__init__'s.
- test_size: int, test size for temporal cross-validation.
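Below is a hedged sketch of calling fit directly on a TimeSeriesDataset; the usual entry point is the NeuralForecast wrapper shown earlier, which builds this dataset for you. The unpacking assumes TimeSeriesDataset.from_df returns the dataset as its first element.

```python
from neuralforecast.models import RNN
from neuralforecast.tsdataset import TimeSeriesDataset
from neuralforecast.utils import AirPassengersDF

# Build the dataset from a long-format dataframe (unique_id, ds, y);
# only the first returned value (assumed to be the dataset) is kept.
dataset, *_ = TimeSeriesDataset.from_df(AirPassengersDF)

model = RNN(h=12, input_size=24, max_steps=100)
model.fit(dataset, val_size=12)  # hold out the last 12 steps of each series for validation
```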
RNN.predict
Predict. Neural network prediction with PL's Trainer execution of predict_step.

Parameters:
- dataset: NeuralForecast's TimeSeriesDataset, see documentation.
- test_size: int=None, test size for temporal cross-validation.
- step_size: int=1, step size between each window.
- random_seed: int=None, random seed for pytorch initializer and numpy generators, overwrites model.__init__'s.
- quantiles: list of floats, optional (default=None), target quantiles to predict.
- **data_module_kwargs: PL's TimeSeriesDataModule args, see documentation.
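Continuing the fit sketch above, predict can be called directly on the same dataset; the NeuralForecast wrapper normally does this for you and reshapes the raw output into a forecast dataframe.

```python
# Point forecasts for the horizon h defined at initialization.
y_hat = model.predict(dataset=dataset)
print(type(y_hat))  # raw model output, before the wrapper formats it into a dataframe
```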

