TSMixer
Time-Series Mixer (TSMixer) is an MLP-based multivariate time-series forecasting model. TSMixer jointly learns temporal and cross-sectional representations of the time series by repeatedly combining time and feature information using stacked mixing layers. A mixing layer consists of a sequential time-mixing and feature-mixing Multi Layer Perceptron (MLP). Note: this model cannot handle exogenous inputs. If you want to use additional exogenous inputs, use TSMixerx.
1. Auxiliary Functions
1.1 Mixing layers
A mixing layer consists of a sequential time-mixing and feature-mixing Multi Layer Perceptron (MLP).
source
MixingLayer
source
FeatureMixing
source
TemporalMixing
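As a rough illustration of the idea (a sketch only, not the library's MixingLayer implementation; the class name, layer structure, and the omission of normalization layers are simplifications), a mixing layer can be written in PyTorch as a temporal MLP applied across the time dimension followed by a feature MLP applied across the series dimension, each with a residual connection:

```python
import torch
import torch.nn as nn

class MixingLayerSketch(nn.Module):
    """Illustrative mixing layer: a temporal MLP mixes information across
    time steps, then a feature MLP mixes information across series.
    Normalization layers are omitted for brevity."""

    def __init__(self, input_size: int, n_series: int, ff_dim: int, dropout: float):
        super().__init__()
        # Temporal mixing: a linear layer applied along the time dimension.
        self.temporal_mlp = nn.Sequential(
            nn.Linear(input_size, input_size), nn.ReLU(), nn.Dropout(dropout)
        )
        # Feature mixing: a two-layer feed-forward block along the series dimension.
        self.feature_mlp = nn.Sequential(
            nn.Linear(n_series, ff_dim), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(ff_dim, n_series), nn.Dropout(dropout),
        )

    def forward(self, x):
        # x: [batch, input_size, n_series]
        h = self.temporal_mlp(x.permute(0, 2, 1)).permute(0, 2, 1)
        x = x + h                      # residual over temporal mixing
        x = x + self.feature_mlp(x)    # residual over feature mixing
        return x
```

Stacking n_block of these layers gives the backbone of the model.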
1.2 Reversible Instance Normalization
An instance normalization layer that is reversible, based on this reference implementation.
source
ReversibleInstanceNorm1d
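A minimal sketch of the idea (the library's ReversibleInstanceNorm1d may differ in details such as learnable affine parameters and tensor layout; the class and method names below are illustrative): each series is standardized with its own mean and standard deviation over the input window, and the same statistics are reused to de-normalize the model's outputs.

```python
import torch
import torch.nn as nn

class RevINSketch(nn.Module):
    """Sketch of reversible instance normalization: normalize each series
    with per-window statistics, then invert the transform on the outputs."""

    def __init__(self, eps: float = 1e-5):
        super().__init__()
        self.eps = eps

    def normalize(self, x):
        # x: [batch, time, n_series]; statistics are computed per series, per window.
        self.mean = x.mean(dim=1, keepdim=True)
        self.std = x.std(dim=1, keepdim=True) + self.eps
        return (x - self.mean) / self.std

    def denormalize(self, y):
        # y: [batch, horizon, n_series]; reuse the stored statistics.
        return y * self.std + self.mean
```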
2. Model
source
TSMixer
Time-Series Mixer (TSMixer) is an MLP-based multivariate time-series forecasting model. TSMixer jointly learns temporal and cross-sectional representations of the time series by repeatedly combining time and feature information using stacked mixing layers. A mixing layer consists of a sequential time-mixing and feature-mixing Multi Layer Perceptron (MLP).
Parameters:
h: int, forecast horizon.
input_size: int, considered autoregressive inputs (lags), y=[1,2,3,4] input_size=2 -> lags=[1,2].
n_series: int, number of time series.
futr_exog_list: str list, future exogenous columns.
hist_exog_list: str list, historic exogenous columns.
stat_exog_list: str list, static exogenous columns.
n_block: int=2, number of mixing layers in the model.
ff_dim: int=64, number of units for the second feed-forward layer in the feature MLP.
dropout: float=0.9, dropout rate between (0, 1).
revin: bool=True, if True uses Reversible Instance Normalization to process inputs and outputs.
loss: PyTorch module, instantiated train loss class from losses collection.
valid_loss: PyTorch module=loss, instantiated valid loss class from losses collection.
max_steps: int=1000, maximum number of training steps.
learning_rate: float=1e-3, learning rate between (0, 1).
num_lr_decays: int=-1, number of learning rate decays, evenly distributed across max_steps.
early_stop_patience_steps: int=-1, number of validation iterations before early stopping.
val_check_steps: int=100, number of training steps between every validation loss check.
batch_size: int=32, number of different series in each batch.
step_size: int=1, step size between each window of temporal data.
scaler_type: str='identity', type of scaler for temporal inputs normalization, see temporal scalers.
random_seed: int=1, random seed for pytorch initializer and numpy generators.
num_workers_loader: int=os.cpu_count(), workers to be used by TimeSeriesDataLoader.
drop_last_loader: bool=False, if True TimeSeriesDataLoader drops last non-full batch.
alias: str, optional, custom name of the model.
optimizer: subclass of 'torch.optim.Optimizer', optional, user specified optimizer instead of the default choice (Adam).
optimizer_kwargs: dict, optional, dictionary of parameters used by the user specified optimizer.
lr_scheduler: subclass of 'torch.optim.lr_scheduler.LRScheduler', optional, user specified lr_scheduler instead of the default choice (StepLR).
lr_scheduler_kwargs: dict, optional, dictionary of parameters used by the user specified lr_scheduler.
dataloader_kwargs: dict, optional, dictionary of parameters passed into the PyTorch Lightning dataloader by the TimeSeriesDataLoader.
**trainer_kwargs: keyword trainer arguments inherited from PyTorch Lightning's trainer.
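For example, a model that forecasts 12 steps ahead from 24 lags for a panel of 2 series could be configured as follows (the values below are arbitrary illustrations; any parameter not listed keeps its default):

```python
from neuralforecast.models import TSMixer
from neuralforecast.losses.pytorch import MAE

model = TSMixer(
    h=12,                     # forecast horizon
    input_size=24,            # autoregressive lags
    n_series=2,               # number of series in the panel
    n_block=2,                # stacked mixing layers
    ff_dim=64,                # hidden units of the feature MLP
    revin=True,               # reversible instance normalization
    loss=MAE(),               # training loss
    max_steps=500,
    scaler_type='identity',
)
```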
TSMixer.fit
Fit.

The fit method optimizes the neural network's weights using the initialization parameters (learning_rate, windows_batch_size, ...) and the loss function as defined during initialization. Within fit we use a PyTorch Lightning Trainer that inherits the initialization's self.trainer_kwargs to customize its inputs, see PL's trainer arguments.

The method is designed to be compatible with SKLearn-like classes, and in particular with the StatsForecast library.

By default the model does not save training checkpoints to protect disk memory; to get them, set enable_checkpointing=True in __init__.
Parameters:
dataset: NeuralForecast's TimeSeriesDataset, see documentation.
val_size: int, validation size for temporal cross-validation.
test_size: int, test size for temporal cross-validation.
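In practice fit is rarely called directly; the NeuralForecast wrapper builds the TimeSeriesDataset from a long-format dataframe and dispatches to it. A brief sketch, assuming the model instance defined above and a training dataframe Y_train_df with unique_id, ds and y columns:

```python
from neuralforecast import NeuralForecast

nf = NeuralForecast(models=[model], freq='M')   # 'model' is the TSMixer instance from above
nf.fit(df=Y_train_df, val_size=12)              # builds the TimeSeriesDataset and calls TSMixer.fit
```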
TSMixer.predict
Predict.

Neural network prediction with PL's Trainer execution of predict_step.
Parameters:
dataset: NeuralForecast's TimeSeriesDataset, see documentation.
test_size: int=None, test size for temporal cross-validation.
step_size: int=1, step size between each window.
**data_module_kwargs: PL's TimeSeriesDataModule args, see documentation.
3. Usage Examples
Train the model and forecast future values with the predict method, as shown below.
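A minimal end-to-end sketch using the AirPassengersPanel example dataset that ships with the library (the split, frequency string and hyperparameter values are illustrative; adapt them to your data):

```python
import pandas as pd
from neuralforecast import NeuralForecast
from neuralforecast.models import TSMixer
from neuralforecast.utils import AirPassengersPanel

# Hold out the last 12 months of each series for evaluation.
Y_df = AirPassengersPanel[['unique_id', 'ds', 'y']]
cutoff = Y_df['ds'].max() - pd.DateOffset(months=12)
Y_train_df = Y_df[Y_df['ds'] <= cutoff]

model = TSMixer(h=12, input_size=24, n_series=2, max_steps=500)
nf = NeuralForecast(models=[model], freq='M')   # monthly frequency ('ME' on recent pandas)
nf.fit(df=Y_train_df)

Y_hat_df = nf.predict()   # forecasts h=12 steps ahead for every series
print(Y_hat_df.head())
```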
Use cross_validation to forecast multiple historical windows, as in the sketch below.
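A sketch of historical evaluation with cross_validation on the same example dataset (three 12-month evaluation windows; the parameter values are illustrative):

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import TSMixer
from neuralforecast.utils import AirPassengersPanel

nf = NeuralForecast(
    models=[TSMixer(h=12, input_size=24, n_series=2, max_steps=500)],
    freq='M',
)
cv_df = nf.cross_validation(
    df=AirPassengersPanel[['unique_id', 'ds', 'y']],
    n_windows=3,    # number of evaluation windows
    step_size=12,   # shift between consecutive windows
)
print(cv_df.head())  # columns include ds, cutoff, y and the TSMixer forecasts
```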