GRU
Cho et al. proposed the Gated Recurrent Unit (GRU) to improve on LSTM and Elman cells. The predictions at each time step are given by an MLP decoder. This architecture follows closely the original Multi Layer Elman RNN, with the main difference being its use of GRU cells.
The predictions are obtained by transforming the hidden states into contexts $\mathbf{c}_{[t+1:t+H]}$, that are decoded and adapted into $\mathbf{\hat{y}}_{[t+1:t+H]}$ through MLPs:

$$\mathbf{h}_{t} = \textrm{GRU}([\mathbf{y}_{t},\mathbf{x}^{(h)}_{t},\mathbf{x}^{(s)}], \mathbf{h}_{t-1})$$
$$\mathbf{c}_{[t+1:t+H]} = \textrm{Linear}([\mathbf{h}_{t}, \mathbf{x}^{(f)}_{[:t+H]}])$$
$$\hat{y}_{\tau} = \textrm{MLP}([\mathbf{c}_{\tau}, \mathbf{x}^{(f)}_{\tau}])$$

where $\mathbf{h}_{t}$ is the hidden state for time $t$, $\mathbf{y}_{t}$ is the input at time $t$, $\mathbf{h}_{t-1}$ is the hidden state of the previous layer at $t-1$, $\mathbf{x}^{(s)}$ are static exogenous inputs, $\mathbf{x}^{(h)}_{t}$ are historic exogenous inputs, and $\mathbf{x}^{(f)}_{[:t+H]}$ are future exogenous inputs available at the time of the prediction.
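To make the recurrence above concrete, here is a minimal PyTorch sketch of the encoder-decoder flow. The layer sizes and variable names are illustrative assumptions, not the library's internal implementation:

```python
import torch
import torch.nn as nn

# Illustrative sketch only: shapes and names are assumptions, not the
# internals of neuralforecast's GRU model.
batch, seq_len, n_feats = 8, 24, 3    # series per batch, input window, features
hidden, context, horizon = 200, 10, 12

gru = nn.GRU(input_size=n_feats, hidden_size=hidden,
             num_layers=2, batch_first=True)
to_context = nn.Linear(hidden, context * horizon)   # h_t -> c_{[t+1:t+H]}
mlp = nn.Sequential(nn.Linear(context, 100),        # c_tau -> y_hat_tau
                    nn.ReLU(),
                    nn.Linear(100, 1))

x = torch.randn(batch, seq_len, n_feats)            # [y_t, x^(h)_t, x^(s)]
h, _ = gru(x)                                       # hidden states for every t
c = to_context(h[:, -1]).reshape(batch, horizon, context)
y_hat = mlp(c).squeeze(-1)                          # one prediction per horizon step
print(y_hat.shape)                                  # torch.Size([8, 12])
```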
References
- Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, Yoshua Bengio (2014). “Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling”.
- Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, Yoshua Bengio (2014). “On the Properties of Neural Machine Translation: Encoder-Decoder Approaches”.
GRU
Multi Layer Recurrent Network with Gated Units (GRU) and MLP decoder. The network has non-linear activation functions and is trained using ADAM stochastic gradient descent. The network accepts static, historic and future exogenous data, and flattens the inputs.
Parameters:
h: int, forecast horizon.
input_size: int, maximum sequence length for truncated train backpropagation. Default -1 uses all history.
inference_input_size: int, maximum sequence length for truncated inference. Default -1 uses all history.
encoder_n_layers: int=2, number of layers for the GRU.
encoder_hidden_size: int=200, units for the GRU’s hidden state size.
encoder_activation: Optional[str]=None, deprecated. The activation function in GRU is frozen in PyTorch.
encoder_bias: bool=True, whether or not to use biases b_ih, b_hh within GRU units.
encoder_dropout: float=0., dropout regularization applied to GRU outputs.
context_size: int=10, size of context vector for each timestamp on the forecasting window.
decoder_hidden_size: int=200, size of hidden layer for the MLP decoder.
decoder_layers: int=2, number of layers for the MLP decoder.
futr_exog_list: str list, future exogenous columns.
hist_exog_list: str list, historic exogenous columns.
stat_exog_list: str list, static exogenous columns.
loss: PyTorch module, instantiated train loss class from losses collection.
valid_loss: PyTorch module=loss, instantiated valid loss class from losses collection.
max_steps: int=1000, maximum number of training steps.
learning_rate: float=1e-3, learning rate between (0, 1).
num_lr_decays: int=-1, number of learning rate decays, evenly distributed across max_steps.
early_stop_patience_steps: int=-1, number of validation iterations before early stopping.
val_check_steps: int=100, number of training steps between every validation loss check.
batch_size: int=32, number of different series in each batch.
valid_batch_size: int=None, number of different series in each validation and test batch.
scaler_type: str='robust', type of scaler for temporal inputs normalization, see temporal scalers.
random_seed: int=1, random_seed for pytorch initializer and numpy generators.
num_workers_loader: int=os.cpu_count(), workers to be used by TimeSeriesDataLoader.
drop_last_loader: bool=False, if True, TimeSeriesDataLoader drops last non-full batch.
alias: str, optional, custom name of the model.
optimizer: subclass of torch.optim.Optimizer, optional, user-specified optimizer instead of the default choice (Adam).
optimizer_kwargs: dict, optional, dictionary of parameters used by the user-specified optimizer.
lr_scheduler: subclass of torch.optim.lr_scheduler.LRScheduler, optional, user-specified lr_scheduler instead of the default choice (StepLR).
lr_scheduler_kwargs: dict, optional, dictionary of parameters used by the user-specified lr_scheduler.
dataloader_kwargs: dict, optional, dictionary of parameters passed into the PyTorch Lightning dataloader by the TimeSeriesDataLoader.
**trainer_kwargs: keyword trainer arguments inherited from PyTorch Lightning’s trainer.
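A minimal usage sketch is shown below. It assumes neuralforecast is installed and uses the AirPassengersDF example dataset shipped in neuralforecast.utils; the hyperparameter values are arbitrary choices for illustration:

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import GRU
from neuralforecast.utils import AirPassengersDF

# Arbitrary illustrative hyperparameters; see the parameter list above.
model = GRU(h=12, input_size=24,
            encoder_n_layers=2, encoder_hidden_size=200,
            context_size=10, decoder_hidden_size=200,
            max_steps=500, scaler_type='robust')

nf = NeuralForecast(models=[model], freq='M')  # monthly frequency
nf.fit(df=AirPassengersDF)                     # columns: unique_id, ds, y
forecasts = nf.predict()
```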
GRU.fit
Fit.
The fit method optimizes the neural network’s weights using the initialization parameters (learning_rate, batch_size, …) and the loss function as defined during initialization. Within fit we use a PyTorch Lightning Trainer that inherits the initialization’s self.trainer_kwargs to customize its inputs; see PL’s trainer arguments.
The method is designed to be compatible with SKLearn-like classes, and in particular with the StatsForecast library.
By default, the model does not save training checkpoints to protect disk memory; to enable them, set enable_checkpointing=True in __init__.
Parameters:
dataset: NeuralForecast’s TimeSeriesDataset, see documentation.
val_size: int, validation size for temporal cross-validation.
test_size: int, test size for temporal cross-validation.
random_seed: int=None, random_seed for pytorch initializer and numpy generators, overwrites model.__init__’s.
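As a sketch, fit can also be called directly with a TimeSeriesDataset built from a dataframe (NeuralForecast.fit normally constructs this dataset for you; model here is assumed to be the GRU instance from the earlier example):

```python
from neuralforecast.tsdataset import TimeSeriesDataset
from neuralforecast.utils import AirPassengersDF

# Build the dataset that NeuralForecast would construct internally.
dataset, *_ = TimeSeriesDataset.from_df(AirPassengersDF)
model.fit(dataset, val_size=12)   # hold out 12 steps for validation
```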
GRU.predict
Predict.
Neural network prediction with PL’s Trainer execution of predict_step.
Parameters:
dataset: NeuralForecast’s TimeSeriesDataset, see documentation.
step_size: int=1, step size between each window.
random_seed: int=None, random_seed for pytorch initializer and numpy generators, overwrites model.__init__’s.
**data_module_kwargs: PL’s TimeSeriesDataModule args, see documentation.
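Continuing the sketch above, predict can be called directly on the fitted model; model and dataset are assumed to come from the previous examples:

```python
# step_size=1 slides the forecast window one step at a time.
y_hat = model.predict(dataset, step_size=1)
print(y_hat.shape)   # array of model forecasts
```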