TCN
For a long time in deep learning, sequence modeling was synonymous with recurrent networks, yet several papers have shown that simple convolutional architectures can outperform canonical recurrent networks like LSTMs by demonstrating longer effective memory. By skipping temporal connections, dilated causal convolution filters can be applied to larger time spans while remaining computationally efficient.
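To make the dilation mechanism concrete, here is a minimal sketch in PyTorch (not NeuralForecast's internal implementation): left-padding the input by (kernel_size - 1) * dilation keeps the convolution causal, while the dilation lets the kernel skip steps and cover a wider time span per layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """Dilated causal 1D convolution: output at time t depends only on inputs <= t."""
    def __init__(self, in_channels, out_channels, kernel_size, dilation=1):
        super().__init__()
        # Left-pad by (kernel_size - 1) * dilation so no future values leak in.
        self.left_padding = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_channels, out_channels,
                              kernel_size, dilation=dilation)

    def forward(self, x):  # x: (batch, channels, time)
        x = F.pad(x, (self.left_padding, 0))  # pad only on the left (the past)
        return self.conv(x)

# Stacking layers with kernel_size=2 and dilations 1, 2, 4, 8 yields a
# receptive field of 16 past steps with only 4 layers.
x = torch.randn(1, 1, 32)
layer = CausalConv1d(1, 8, kernel_size=2, dilation=4)
print(layer(x).shape)  # torch.Size([1, 8, 32]); output length matches input
```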
The predictions are obtained by transforming the hidden states into contexts $\mathbf{c}_{[t+1:t+H]}$, which are decoded and adapted into $\mathbf{\hat{y}}_{[t+1:t+H],[q]}$ through MLPs.

$$\mathbf{h}_{t} = \textrm{TCN}\left(\left[\mathbf{y}_{t},\; \mathbf{x}^{(h)}_{t},\; \mathbf{x}^{(s)}\right],\; \mathbf{h}_{t-1}\right)$$

$$\mathbf{c}_{[t+1:t+H]} = \textrm{Context}\left(\mathbf{h}_{t}\right)$$

$$\mathbf{\hat{y}}_{[t+1:t+H],[q]} = \textrm{MLP}\left(\left[\mathbf{c}_{[t+1:t+H]},\; \mathbf{x}^{(f)}_{[t+1:t+H]}\right]\right)$$

where $\mathbf{h}_{t}$ is the hidden state for time $t$, $\mathbf{y}_{t}$ is the input at time $t$, $\mathbf{h}_{t-1}$ is the hidden state of the previous layer at $t-1$, $\mathbf{x}^{(s)}$ are static exogenous inputs, $\mathbf{x}^{(h)}_{t}$ are historic exogenous inputs, and $\mathbf{x}^{(f)}_{[:t+H]}$ are the future exogenous inputs available at the time of the prediction.
References
- van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A. W., & Kavukcuoglu, K. (2016). WaveNet: A generative model for raw audio. Computing Research Repository, abs/1609.03499. URL: http://arxiv.org/abs/1609.03499.
- Bai, S., Kolter, J. Z., & Koltun, V. (2018). An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. Computing Research Repository, abs/1803.01271. URL: https://arxiv.org/abs/1803.01271.
source
TCN
Temporal Convolution Network (TCN), with MLP decoder. The historical encoder uses dilated skip connections to obtain efficient long memory, while the rest of the architecture allows for future exogenous alignment.
Parameters:
- h: int, forecast horizon.
- input_size: int, maximum sequence length for truncated train backpropagation. Default -1 uses 3 * horizon.
- inference_input_size: int, maximum sequence length for truncated inference. Default None uses input_size history.
- kernel_size: int, size of the convolving kernel.
- dilations: int list, controls the temporal spacing between the kernel points; also known as the à trous algorithm.
- encoder_hidden_size: int=200, units for the TCN's hidden state size.
- encoder_activation: str=tanh, type of TCN activation from tanh or relu.
- context_size: int=10, size of context vector for each timestamp on the forecasting window.
- decoder_hidden_size: int=200, size of hidden layer for the MLP decoder.
- decoder_layers: int=2, number of layers for the MLP decoder.
- futr_exog_list: str list, future exogenous columns.
- hist_exog_list: str list, historic exogenous columns.
- stat_exog_list: str list, static exogenous columns.
- loss: PyTorch module, instantiated train loss class from losses collection.
- valid_loss: PyTorch module=loss, instantiated valid loss class from losses collection.
- max_steps: int=1000, maximum number of training steps.
- learning_rate: float=1e-3, learning rate between (0, 1).
- num_lr_decays: int=-1, number of learning rate decays, evenly distributed across max_steps.
- early_stop_patience_steps: int=-1, number of validation iterations before early stopping.
- val_check_steps: int=100, number of training steps between every validation loss check.
- batch_size: int=32, number of different series in each batch.
- valid_batch_size: int=None, number of different series in each validation and test batch.
- windows_batch_size: int=128, number of windows to sample in each training batch, default uses all.
- inference_windows_batch_size: int=1024, number of windows to sample in each inference batch, -1 uses all.
- start_padding_enabled: bool=False, if True, the model will pad the time series with zeros at the beginning, by input size.
- step_size: int=1, step size between each window of temporal data.
- scaler_type: str='robust', type of scaler for temporal inputs normalization, see temporal scalers.
- random_seed: int=1, random seed for pytorch initializer and numpy generators.
- drop_last_loader: bool=False, if True TimeSeriesDataLoader drops last non-full batch.
- alias: str, optional, custom name of the model.
- optimizer: subclass of torch.optim.Optimizer, optional, user specified optimizer instead of the default choice (Adam).
- optimizer_kwargs: dict, optional, parameters used by the user specified optimizer.
- lr_scheduler: subclass of torch.optim.lr_scheduler.LRScheduler, optional, user specified lr_scheduler instead of the default choice (StepLR).
- lr_scheduler_kwargs: dict, optional, parameters used by the user specified lr_scheduler.
- dataloader_kwargs: dict, optional, parameters passed into the PyTorch Lightning dataloader by the TimeSeriesDataLoader.
- **trainer_kwargs: keyword trainer arguments inherited from PyTorch Lightning's trainer.
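To illustrate the optimizer-related parameters above, the following sketch swaps the default Adam/StepLR pair for AdamW with a cosine schedule; all values are illustrative assumptions, not recommendations.

```python
import torch
from neuralforecast.models import TCN
from neuralforecast.losses.pytorch import MAE

# Illustrative only: overrides the default Adam optimizer and StepLR scheduler.
model = TCN(
    h=12,                          # forecast 12 steps ahead
    input_size=-1,                 # -1 -> use 3 * h past steps
    kernel_size=2,
    dilations=[1, 2, 4, 8, 16],
    loss=MAE(),
    optimizer=torch.optim.AdamW,
    optimizer_kwargs={'weight_decay': 1e-4},
    lr_scheduler=torch.optim.lr_scheduler.CosineAnnealingLR,
    lr_scheduler_kwargs={'T_max': 1000},
    max_steps=1000,
)
```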
TCN.fit
Fit.
The fit method optimizes the neural network's weights using the initialization parameters (learning_rate, windows_batch_size, ...) and the loss function as defined during the initialization. Within fit we use a PyTorch Lightning Trainer that inherits the initialization's self.trainer_kwargs to customize its inputs, see PL's trainer arguments.
The method is designed to be compatible with SKLearn-like classes and in particular to be compatible with the StatsForecast library.
By default the model does not save training checkpoints to protect disk memory; to get them, set enable_checkpointing=True in __init__.
Parameters:
- dataset: NeuralForecast's TimeSeriesDataset, see documentation.
- val_size: int, validation size for temporal cross-validation.
- random_seed: int=None, random seed for pytorch initializer and numpy generators, overwrites model.__init__'s.
- test_size: int, test size for temporal cross-validation.
TCN.predict
Predict.
Neural network prediction with PL's Trainer execution of predict_step.
Parameters:
- dataset: NeuralForecast's TimeSeriesDataset, see documentation.
- test_size: int=None, test size for temporal cross-validation.
- step_size: int=1, step size between each window.
- random_seed: int=None, random seed for pytorch initializer and numpy generators, overwrites model.__init__'s.
- quantiles: list of floats, optional (default=None), target quantiles to predict.
- **data_module_kwargs: PL's TimeSeriesDataModule args, see documentation.
Usage Example
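The original example is not preserved on this page; the following is a minimal sketch of the typical workflow, assuming the standard NeuralForecast wrapper API and the AirPassengersDF sample dataset shipped with the library.

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import TCN
from neuralforecast.losses.pytorch import MAE
from neuralforecast.utils import AirPassengersDF

Y_df = AirPassengersDF  # long-format frame with columns: unique_id, ds, y

model = TCN(h=12,                    # 12-month forecast horizon
            input_size=-1,           # -1 -> use 3 * h past steps
            loss=MAE(),
            kernel_size=2,
            dilations=[1, 2, 4, 8, 16],
            encoder_hidden_size=128,
            decoder_hidden_size=128,
            max_steps=500,
            scaler_type='robust')

nf = NeuralForecast(models=[model], freq='M')  # monthly data ('ME' on newer pandas)
nf.fit(df=Y_df)
Y_hat_df = nf.predict()
print(Y_hat_df.head())
```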