TimesNet
The TimesNet univariate model tackles the challenge of modeling multiple intraperiod and interperiod temporal variations.
The architecture has the following distinctive features:
- An embedding layer that maps the input sequence into a latent space.
- Transformation of 1D time series into 2D tensors, based on periods found by FFT.
- A convolutional Inception block that captures temporal variations at different scales and between periods.
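To make the 1D-to-2D transformation concrete, here is a minimal, self-contained sketch (illustrative only, not the library's code) of folding a series along one detected period, so that a 2D convolution sees intraperiod variation along one axis and interperiod variation along the other:

```python
import torch

# Toy univariate series of length 24 with an assumed dominant period of 6.
series = torch.arange(24, dtype=torch.float32)        # shape: [24]
period = 6                                            # e.g. found via FFT

# Fold the 1D series into a 2D tensor: rows index successive periods,
# columns index positions within a period. Intraperiod variation lives
# along dim=1, interperiod variation along dim=0.
folded = series.reshape(-1, period)                   # shape: [4, 6]

# A 2D convolution (as in the Inception block) can now capture both.
conv = torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, padding=1)
out = conv(folded.unsqueeze(0).unsqueeze(0))          # shape: [1, 1, 4, 6]
```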
References
- Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, Mingsheng Long. "TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis".
- Based on the implementation in https://github.com/thuml/Time-Series-Library (license: https://github.com/thuml/Time-Series-Library/blob/main/LICENSE).
1. Auxiliary Functions
source
Inception_Block_V1
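The following is a simplified sketch of what this block typically looks like in the referenced Time-Series-Library implementation (weight initialization omitted): several parallel 2D convolutions with increasing kernel sizes whose outputs are averaged, giving a multi-scale view of the 2D-folded series.

```python
import torch
import torch.nn as nn


class Inception_Block_V1(nn.Module):
    """Parallel 2D convolutions with kernel sizes 1, 3, 5, ..., averaged together."""

    def __init__(self, in_channels: int, out_channels: int, num_kernels: int = 6):
        super().__init__()
        self.kernels = nn.ModuleList(
            [
                nn.Conv2d(in_channels, out_channels, kernel_size=2 * i + 1, padding=i)
                for i in range(num_kernels)
            ]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each kernel size captures temporal variation at a different scale;
        # averaging fuses the multi-scale feature maps.
        return torch.stack([k(x) for k in self.kernels], dim=-1).mean(dim=-1)
```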
source
TimesBlock
source
FFT_for_Period
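A minimal sketch of this helper, consistent with the referenced implementation: it ranks frequencies by their average FFT amplitude, keeps the top k, and converts each frequency index into a candidate period length.

```python
import torch


def FFT_for_Period(x: torch.Tensor, k: int = 2):
    """x: [batch, time, channels]. Returns candidate periods and their amplitudes."""
    xf = torch.fft.rfft(x, dim=1)                    # frequency-domain representation
    amplitudes = xf.abs().mean(0).mean(-1)           # average amplitude per frequency
    amplitudes[0] = 0                                # drop the zero-frequency (trend) term
    _, top_freqs = torch.topk(amplitudes, k)         # k dominant frequency indices
    periods = x.shape[1] // top_freqs                # frequency index -> period length
    return periods, xf.abs().mean(-1)[:, top_freqs]  # per-sample weights for aggregation
```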
2. TimesNet
source
TimesNet
The TimesNet univariate model tackles the challenge of modeling multiple intraperiod and interperiod temporal variations.
Parameters
h : int, Forecast horizon.
input_size : int, Length of input window (lags).
stat_exog_list : list of str, optional (default=None), Static exogenous columns.
hist_exog_list : list of str, optional (default=None), Historic exogenous columns.
futr_exog_list : list of str, optional (default=None), Future exogenous columns.
exclude_insample_y : bool (default=False), If True, the model skips the autoregressive features y[t-input_size:t].
hidden_size : int (default=64), Size of embedding for embedding and encoders.
dropout : float between [0, 1) (default=0.1), Dropout for embeddings.
conv_hidden_size : int (default=64), Channels of the Inception block.
top_k : int (default=5), Number of periods.
num_kernels : int (default=6), Number of kernels for the Inception block.
encoder_layers : int (default=2), Number of encoder layers.
loss : PyTorch module (default=MAE()), Instantiated train loss class from losses collection.
valid_loss : PyTorch module (default=None, uses loss), Instantiated validation loss class from losses collection.
max_steps : int (default=1000), Maximum number of training steps.
learning_rate : float (default=1e-4), Learning rate.
num_lr_decays : int (default=-1), Number of learning rate decays, evenly distributed across max_steps. If -1, no learning rate decay is performed.
early_stop_patience_steps : int (default=-1), Number of validation iterations before early stopping. If -1, no early stopping is performed.
val_check_steps : int (default=100), Number of training steps between every validation loss check.
batch_size : int (default=32), Number of different series in each batch.
valid_batch_size : int (default=None), Number of different series in each validation and test batch; if None, uses batch_size.
windows_batch_size : int (default=64), Number of windows to sample in each training batch.
inference_windows_batch_size : int (default=256), Number of windows to sample in each inference batch.
start_padding_enabled : bool (default=False), If True, the model will pad the time series with zeros at the beginning, by input size.
step_size : int (default=1), Step size between each window of temporal data.
scaler_type : str (default='standard'), Type of scaler for temporal inputs normalization, see temporal scalers.
random_seed : int (default=1), Random seed for pytorch initializer and numpy generators.
drop_last_loader : bool (default=False), If True, TimeSeriesDataLoader drops the last non-full batch.
alias : str, optional (default=None), Custom name of the model.
optimizer : Subclass of 'torch.optim.Optimizer', optional (default=None), User-specified optimizer instead of the default choice (Adam).
optimizer_kwargs : dict, optional (default=None), Dictionary of parameters used by the user-specified optimizer.
lr_scheduler : Subclass of 'torch.optim.lr_scheduler.LRScheduler', optional (default=None), User-specified lr_scheduler instead of the default choice (StepLR).
lr_scheduler_kwargs : dict, optional (default=None), Dictionary of parameters used by the user-specified lr_scheduler.
dataloader_kwargs : dict, optional (default=None), Dictionary of parameters passed into the PyTorch Lightning dataloader by the TimeSeriesDataLoader.
**trainer_kwargs : Keyword trainer arguments inherited from PyTorch Lightning's trainer.
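For illustration, a short sketch of instantiating the model with a few of the parameters above (the values are arbitrary examples, not recommendations; the import paths assume the neuralforecast package):

```python
from neuralforecast.models import TimesNet
from neuralforecast.losses.pytorch import MAE

# Forecast 12 steps ahead from 24 lags; unspecified parameters keep their defaults.
model = TimesNet(
    h=12,
    input_size=24,
    hidden_size=64,          # embedding/encoder width
    conv_hidden_size=64,     # Inception block channels
    top_k=5,                 # number of FFT-detected periods per block
    num_kernels=6,           # kernels inside each Inception block
    encoder_layers=2,
    loss=MAE(),
    max_steps=100,
    scaler_type='standard',
)
```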
TimesNet.fit
Fit.
The fit method optimizes the neural network's weights using the initialization parameters (learning_rate, windows_batch_size, ...) and the loss function as defined during initialization. Within fit we use a PyTorch Lightning Trainer that inherits the initialization's self.trainer_kwargs to customize its inputs; see PL's trainer arguments.
The method is designed to be compatible with SKLearn-like classes, and in particular with the StatsForecast library.
By default the model does not save training checkpoints, to protect disk memory; to save them, set enable_checkpointing=True in __init__.
Parameters:
dataset : NeuralForecast's TimeSeriesDataset, see documentation.
val_size : int, validation size for temporal cross-validation.
random_seed : int=None, random seed for pytorch initializer and numpy generators; overwrites model.__init__'s.
test_size : int, test size for temporal cross-validation.
TimesNet.predict
Predict.
Neural network prediction with PL's Trainer execution of predict_step.
Parameters:
dataset : NeuralForecast's TimeSeriesDataset, see documentation.
test_size : int=None, test size for temporal cross-validation.
step_size : int=1, Step size between each window.
random_seed : int=None, random seed for pytorch initializer and numpy generators; overwrites model.__init__'s.
quantiles : list of floats, optional (default=None), target quantiles to predict.
**data_module_kwargs : PL's TimeSeriesDataModule args, see documentation.
Usage Example
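A minimal end-to-end sketch, assuming the standard NeuralForecast workflow and the bundled AirPassengersDF example dataset (the split date and hyperparameter values are illustrative):

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import TimesNet
from neuralforecast.utils import AirPassengersDF

# Monthly airline passengers; columns: unique_id, ds, y.
Y_train_df = AirPassengersDF[AirPassengersDF.ds <= '1959-12-31']  # hold out the last 12 months

model = TimesNet(h=12, input_size=24, max_steps=100)
nf = NeuralForecast(models=[model], freq='M')
nf.fit(df=Y_train_df)

forecasts = nf.predict()   # DataFrame with ds and a TimesNet forecast column
print(forecasts.head())
```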