The Dilated Recurrent Neural Network (DilatedRNN) addresses common challenges of modeling long sequences, such as vanishing gradients and computational efficiency, while improving the model's flexibility to capture complex relationships and maintaining parsimony. The DilatedRNN builds a deep stack of RNN layers using skip connections on the temporal and depth dimensions of the network. The temporal dilated recurrent skip connections offer the capability to focus on multi-resolution inputs.
The predictions are obtained by transforming the hidden states into contexts $\mathbf{c}_{[t+1:t+H]}$, which are decoded and adapted into $\mathbf{\hat{y}}_{[t+1:t+H],[q]}$ through MLPs:

$$\mathbf{h}_{t} = \textrm{RNN}([\mathbf{y}_{t};\mathbf{x}^{(h)}_{t};\mathbf{x}^{(s)}],\ \mathbf{h}_{t-1})$$

where $\mathbf{h}_{t}$ is the hidden state for time $t$, $\mathbf{y}_{t}$ is the input at time $t$, $\mathbf{h}_{t-1}$ is the hidden state of the previous layer at $t-1$, $\mathbf{x}^{(s)}$ are static exogenous inputs, $\mathbf{x}^{(h)}_{t}$ are historic exogenous inputs, and $\mathbf{x}^{(f)}_{[:t+H]}$ are future exogenous inputs available at the time of the prediction.
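To make the temporal skip connections concrete, here is a minimal illustrative sketch (not the library's implementation; the helper name dilated_recurrence is ours) that runs a single RNN cell over a sequence while feeding each step the hidden state from d steps back instead of the previous step, which is the dilation mechanism described above:

```python
import torch
import torch.nn as nn

def dilated_recurrence(cell: nn.RNNCell, x: torch.Tensor, d: int) -> torch.Tensor:
    # x: (T, B, F). Each step t receives the hidden state from step t - d,
    # so the layer effectively runs d interleaved recurrences over the sequence.
    T, B, _ = x.shape
    out = []
    for t in range(T):
        h_prev = out[t - d] if t >= d else x.new_zeros(B, cell.hidden_size)
        out.append(cell(x[t], h_prev))
    return torch.stack(out)  # (T, B, hidden_size)

# Usage: with d=2 the even and odd time steps form two independent chains,
# halving the recurrence length and easing gradient propagation.
cell = nn.RNNCell(input_size=3, hidden_size=8)
y = dilated_recurrence(cell, torch.randn(12, 4, 3), d=2)
print(y.shape)  # torch.Size([12, 4, 8])
```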
References
- Shiyu Chang, et al. "Dilated Recurrent Neural Networks".
- Yao Qin, et al. "A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction".
- Kashif Rasul, et al. "Zalando Research: PyTorch Dilated Recurrent Neural Networks".
source
DilatedRNN
Parameters:
- h: int, forecast horizon.
- input_size: int, maximum sequence length for truncated train backpropagation. Default -1 uses 3 * horizon.
- inference_input_size: int, maximum sequence length for truncated inference. Default None uses input_size history.
- cell_type: str, type of RNN cell to use. Options: 'GRU', 'RNN', 'LSTM', 'ResLSTM', 'AttentiveLSTM'.
- dilations: int list, dilations between layers.
- encoder_hidden_size: int=200, units for the RNN's hidden state size.
- context_size: int=10, size of context vector for each timestamp on the forecasting window.
- decoder_hidden_size: int=200, size of hidden layer for the MLP decoder.
- decoder_layers: int=2, number of layers for the MLP decoder.
- futr_exog_list: str list, future exogenous columns.
- hist_exog_list: str list, historic exogenous columns.
- stat_exog_list: str list, static exogenous columns.
- exclude_insample_y: bool=False, if True the model skips the autoregressive features y[t-input_size:t].
- loss: PyTorch module, instantiated train loss class from losses collection.
- valid_loss: PyTorch module=loss, instantiated valid loss class from losses collection.
- max_steps: int, maximum number of training steps.
- learning_rate: float, learning rate between (0, 1).
- num_lr_decays: int, number of learning rate decays, evenly distributed across max_steps.
- early_stop_patience_steps: int, number of validation iterations before early stopping.
- val_check_steps: int, number of training steps between every validation loss check.
- batch_size: int=32, number of different series in each batch.
- valid_batch_size: int=None, number of different series in each validation and test batch.
- windows_batch_size: int=128, number of windows to sample in each training batch, default uses all.
- inference_windows_batch_size: int=1024, number of windows to sample in each inference batch, -1 uses all.
- start_padding_enabled: bool=False, if True, the model pads the time series with zeros at the beginning, by input size.
- step_size: int=1, step size between each window of temporal data.
- scaler_type: str='robust', type of scaler for temporal inputs normalization, see temporal scalers.
- random_seed: int=1, random seed for pytorch initializer and numpy generators.
- drop_last_loader: bool=False, if True TimeSeriesDataLoader drops the last non-full batch.
- alias: str, optional, custom name of the model.
- optimizer: subclass of 'torch.optim.Optimizer', optional, user specified optimizer instead of the default choice (Adam).
- optimizer_kwargs: dict, optional, dictionary of parameters used by the user specified optimizer.
- lr_scheduler: subclass of 'torch.optim.lr_scheduler.LRScheduler', optional, user specified lr_scheduler instead of the default choice (StepLR).
- lr_scheduler_kwargs: dict, optional, dictionary of parameters used by the user specified lr_scheduler.
- dataloader_kwargs: dict, optional, dictionary of parameters passed into the PyTorch Lightning dataloader by the TimeSeriesDataLoader.
- **trainer_kwargs: keyword trainer arguments inherited from PyTorch Lightning's trainer.
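A minimal usage sketch, assuming the neuralforecast package and its bundled AirPassengersDF example dataset are available; the hyperparameter values below are illustrative, not recommendations:

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import DilatedRNN
from neuralforecast.losses.pytorch import MAE
from neuralforecast.utils import AirPassengersDF

model = DilatedRNN(
    h=12,                        # forecast horizon
    input_size=-1,               # -1 uses 3 * h past observations
    cell_type='LSTM',            # 'GRU', 'RNN', 'LSTM', 'ResLSTM', or 'AttentiveLSTM'
    dilations=[[1, 2], [4, 8]],  # temporal skip lengths per stack of layers
    encoder_hidden_size=200,
    context_size=10,
    decoder_hidden_size=200,
    decoder_layers=2,
    loss=MAE(),
    max_steps=500,
    scaler_type='robust',
)

nf = NeuralForecast(models=[model], freq='M')
nf.fit(df=AirPassengersDF)
preds = nf.predict()
print(preds.head())
```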
DilatedRNN.fit
Fit. The fit method optimizes the neural network's weights using the initialization parameters (learning_rate, windows_batch_size, ...) and the loss function as defined during initialization. Within fit we use a PyTorch Lightning Trainer that inherits the initialization's self.trainer_kwargs to customize its inputs; see PL's trainer arguments.

The method is designed to be compatible with SKLearn-like classes and, in particular, with the StatsForecast library.

By default the model does not save training checkpoints, to protect disk memory; to enable them, set enable_checkpointing=True in __init__.

Parameters:
- dataset: NeuralForecast's TimeSeriesDataset, see documentation.
- val_size: int, validation size for temporal cross-validation.
- random_seed: int=None, random seed for pytorch initializer and numpy generators, overwrites model.__init__'s.
- test_size: int, test size for temporal cross-validation.
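As a sketch, fit can be called directly on a TimeSeriesDataset built with TimeSeriesDataset.from_df (most workflows use the NeuralForecast wrapper shown earlier instead):

```python
from neuralforecast.models import DilatedRNN
from neuralforecast.tsdataset import TimeSeriesDataset
from neuralforecast.utils import AirPassengersDF

# Build the low-level dataset; from_df also returns index/date metadata,
# which we discard here.
dataset, *_ = TimeSeriesDataset.from_df(AirPassengersDF)

model = DilatedRNN(h=12, input_size=24, max_steps=100)
model.fit(dataset=dataset, val_size=12)  # hold out the last 12 points for validation
```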
DilatedRNN.predict
Predict. Neural network prediction with PL's Trainer execution of predict_step.

Parameters:
- dataset: NeuralForecast's TimeSeriesDataset, see documentation.
- test_size: int=None, test size for temporal cross-validation.
- step_size: int=1, step size between each window.
- random_seed: int=None, random seed for pytorch initializer and numpy generators, overwrites model.__init__'s.
- quantiles: list of floats, optional (default=None), target quantiles to predict.
- **data_module_kwargs: PL's TimeSeriesDataModule args, see documentation.
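Continuing the fit sketch above, predict runs on the same dataset; the commented quantiles call assumes a loss that supports quantile outputs was used at initialization:

```python
y_hat = model.predict(dataset=dataset)  # numpy array of forecasts per series

# Assumption: quantile forecasts require a compatible probabilistic loss at init.
# y_q = model.predict(dataset=dataset, quantiles=[0.1, 0.5, 0.9])
```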

