
1. TimeMixer
TimeMixer (BaseModel)
A fully MLP-based multivariate model that forecasts by decomposing the series into seasonal and trend components and mixing them across multiple temporal scales.
Args:
h (int): Forecast horizon.
input_size (int): autoregressive input size; e.g. with y=[1,2,3,4] and input_size=2 the model uses the last two values y_[t-2:t]=[3,4].
n_series (int): number of time-series.
stat_exog_list (list): static exogenous columns.
hist_exog_list (list): historic exogenous columns.
futr_exog_list (list): future exogenous columns.
d_model (int): dimension of the model.
d_ff (int): dimension of the fully-connected network.
dropout (float): dropout rate.
e_layers (int): number of encoder layers.
top_k (int): number of selected frequencies.
decomp_method (str): method of series decomposition [moving_avg, dft_decomp].
moving_avg (int): window size of moving average.
channel_independence (int): 0: channel dependence, 1: channel independence.
down_sampling_layers (int): number of downsampling layers.
down_sampling_window (int): size of downsampling window.
down_sampling_method (str): down sampling method [avg, max, conv].
use_norm (bool): whether to normalize or not.
decoder_input_size_multiplier (float): multiplier applied to input_size to set the decoder input length, default 0.5.
loss (PyTorch module): instantiated train loss class from losses collection.
valid_loss (PyTorch module): instantiated valid loss class from losses collection.
max_steps (int): maximum number of training steps.
learning_rate (float): Learning rate between (0, 1).
num_lr_decays (int): Number of learning rate decays, evenly distributed across max_steps.
early_stop_patience_steps (int): Number of validation iterations before early stopping.
val_check_steps (int): Number of training steps between every validation loss check.
batch_size (int): number of different series in each batch.
valid_batch_size (int): number of different series in each validation and test batch, if None uses batch_size.
windows_batch_size (int): number of windows to sample in each training batch, default uses all.
inference_windows_batch_size (int): number of windows to sample in each inference batch, -1 uses all.
start_padding_enabled (bool): if True, the model will pad the time series with zeros at the beginning, by input size.
training_data_availability_threshold (Union[float, List[float]]): minimum fraction of valid data points required for training windows. Single float applies to both insample and outsample; list of two floats specifies [insample_fraction, outsample_fraction]. Default 0.0 allows windows with only 1 valid data point (current behavior).
step_size (int): step size between each window of temporal data.
scaler_type (str): type of scaler for temporal inputs normalization; see temporal scalers.
random_seed (int): random_seed for pytorch initializer and numpy generators.
drop_last_loader (bool): if True TimeSeriesDataLoader drops last non-full batch.
alias (str): optional, Custom name of the model.
optimizer (Subclass of ‘torch.optim.Optimizer’): optional, user specified optimizer instead of the default choice (Adam).
optimizer_kwargs (dict): optional, dictionary of parameters used by the user specified optimizer.
lr_scheduler (Subclass of ‘torch.optim.lr_scheduler.LRScheduler’): optional, user specified lr_scheduler instead of the default choice (StepLR).
lr_scheduler_kwargs (dict): optional, dictionary of parameters used by the user specified lr_scheduler.
dataloader_kwargs (dict): optional, dictionary of parameters passed by the TimeSeriesDataLoader into the PyTorch Lightning dataloader.
**trainer_kwargs (keyword): trainer arguments inherited from PyTorch Lightning’s trainer.
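A minimal instantiation sketch using the parameters above; the hyperparameter values are illustrative assumptions, not tuned settings:

```python
from neuralforecast.models import TimeMixer
from neuralforecast.losses.pytorch import MAE

# Illustrative hyperparameters only; tune them for your data.
model = TimeMixer(
    h=12,                        # forecast horizon
    input_size=24,               # autoregressive input window
    n_series=2,                  # number of series in the panel
    d_model=32,
    d_ff=64,
    dropout=0.1,
    e_layers=2,
    top_k=5,
    decomp_method='moving_avg',
    moving_avg=25,
    channel_independence=0,
    down_sampling_layers=1,
    down_sampling_window=2,
    down_sampling_method='avg',
    use_norm=True,
    loss=MAE(),
    learning_rate=1e-3,
    max_steps=100,
    batch_size=32,
    scaler_type='identity',
)
```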
TimeMixer.fit
The fit method optimizes the neural network’s weights using the initialization parameters (learning_rate, windows_batch_size, …) and the loss function defined during initialization. Within fit, a PyTorch Lightning Trainer is used that inherits the initialization’s self.trainer_kwargs to customize its inputs; see PL’s trainer arguments.
The method is designed to be compatible with SKLearn-like classes and, in particular, with the StatsForecast library.
By default the model does not save training checkpoints to protect disk memory; to enable them, set enable_checkpointing=True in __init__.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| dataset | TimeSeriesDataset | NeuralForecast’s TimeSeriesDataset, see documentation. | required |
| val_size | int | Validation size for temporal cross-validation. | 0 |
| random_seed | int | Random seed for pytorch initializer and numpy generators, overwrites `model.__init__`’s. | None |
| test_size | int | Test size for temporal cross-validation. | 0 |

Returns:
| Type | Description |
|---|---|
| None | |
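A hedged sketch of calling fit directly on a TimeSeriesDataset; it assumes TimeSeriesDataset.from_df returns the dataset object as its first element (the exact return signature may vary across versions):

```python
from neuralforecast.models import TimeMixer
from neuralforecast.losses.pytorch import MAE
from neuralforecast.tsdataset import TimeSeriesDataset
from neuralforecast.utils import AirPassengersPanel

# AirPassengersPanel: long-format DataFrame with unique_id, ds and y columns (2 series).
dataset, *_ = TimeSeriesDataset.from_df(AirPassengersPanel)

model = TimeMixer(h=12, input_size=24, n_series=2, loss=MAE(), max_steps=100)
model.fit(dataset=dataset, val_size=12)
```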
TimeMixer.predict
Uses a PyTorch Lightning Trainer to execute predict_step and produce forecasts.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| dataset | TimeSeriesDataset | NeuralForecast’s TimeSeriesDataset, see documentation. | required |
| test_size | int | Test size for temporal cross-validation. | None |
| step_size | int | Step size between each window. | 1 |
| random_seed | int | Random seed for pytorch initializer and numpy generators, overwrites `model.__init__`’s. | None |
| quantiles | list | Target quantiles to predict. | None |
| h | int | Prediction horizon; if None, uses the model’s fitted horizon. | None |
| explainer_config | dict | Configuration for explanations. | None |
| **data_module_kwargs | dict | PL’s TimeSeriesDataModule args, see documentation. | |

Returns:
| Type | Description |
|---|---|
| None | |
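Continuing the fit sketch above, a hedged example of generating predictions; only arguments documented in the table are used:

```python
# Point forecasts for the horizon h defined at initialization.
# With a quantile-capable loss, quantiles=[0.1, 0.9] could also be passed.
y_hat = model.predict(dataset=dataset)
```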
Usage example
The example below uses cross_validation to forecast over multiple historic windows.
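A hedged end-to-end sketch with the high-level NeuralForecast wrapper; the hyperparameter values and the monthly frequency string are illustrative assumptions:

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import TimeMixer
from neuralforecast.utils import AirPassengersPanel

Y_df = AirPassengersPanel          # two monthly airline series in long format

model = TimeMixer(h=12, input_size=24, n_series=2,
                  e_layers=2, d_model=32, max_steps=100)

nf = NeuralForecast(models=[model], freq='M')

# Rolling-origin evaluation: refits over n_windows cutoffs and returns a
# DataFrame of forecasts aligned with the corresponding historic values.
cv_df = nf.cross_validation(df=Y_df, n_windows=2, step_size=12)
print(cv_df.head())
```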
2. Auxiliary Functions
2.1 Embedding
DataEmbedding_wo_pos (Module)
Data embedding without positional encoding: a value (token) embedding, optionally combined with a temporal embedding, followed by dropout.
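A rough sketch of what an embedding without positional encoding looks like (not the library’s exact implementation): a circular 1-D convolution lifts the input channels to d_model, followed by dropout.

```python
import torch
import torch.nn as nn

class DataEmbeddingWithoutPosSketch(nn.Module):
    """Value embedding without positional encoding (illustrative sketch)."""

    def __init__(self, c_in: int, d_model: int, dropout: float = 0.1):
        super().__init__()
        self.value_embedding = nn.Conv1d(c_in, d_model, kernel_size=3, padding=1,
                                         padding_mode="circular", bias=False)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, c_in) -> (batch, seq_len, d_model)
        x = self.value_embedding(x.permute(0, 2, 1)).transpose(1, 2)
        return self.dropout(x)
```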
DFT_series_decomp (Module)
Series decomposition block based on the discrete Fourier transform: the top_k largest frequencies are kept as the seasonal component and the residual is treated as the trend.
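A minimal sketch of the DFT-based decomposition idea (illustrative, not the library code): keep the top_k largest non-DC frequencies as the seasonal part and take the residual as the trend.

```python
import torch

def dft_series_decomp(x: torch.Tensor, top_k: int = 5):
    """Split x (shape ..., time) into seasonal and trend parts. Sketch only."""
    xf = torch.fft.rfft(x, dim=-1)                        # complex spectrum
    amp = xf.abs()
    amp[..., 0] = 0                                       # ignore the mean (DC) bin
    # threshold at the smallest of the top_k amplitudes per series
    thresh = torch.topk(amp, top_k, dim=-1).values.min(dim=-1, keepdim=True).values
    xf = xf * (amp >= thresh).to(xf.dtype)                # zero out the other bins
    x_season = torch.fft.irfft(xf, n=x.shape[-1], dim=-1)
    x_trend = x - x_season
    return x_season, x_trend
```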
2.2 Mixing
PastDecomposableMixing (Module)
Past Decomposable Mixing block: decomposes each scale into seasonal and trend parts and mixes them across scales (bottom-up for season, top-down for trend).
MultiScaleTrendMixing (Module)
Top-down mixing of the trend pattern, from the coarsest to the finest scale.
MultiScaleSeasonMixing (Module)
Bottom-up mixing of the seasonal pattern, from the finest to the coarsest scale.
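To illustrate the two mixing directions, here is a simplified sketch (single linear projections rather than the library’s MLPs) of bottom-up season mixing and top-down trend mixing across downsampled scales:

```python
import torch
import torch.nn as nn

class MultiScaleMixingSketch(nn.Module):
    """Illustrative sketch of season (bottom-up) and trend (top-down) mixing."""

    def __init__(self, seq_len: int, down_sampling_window: int, down_sampling_layers: int):
        super().__init__()
        lens = [seq_len // down_sampling_window ** i
                for i in range(down_sampling_layers + 1)]
        # fine -> coarse projections for the season path
        self.down = nn.ModuleList(nn.Linear(lens[i], lens[i + 1])
                                  for i in range(down_sampling_layers))
        # coarse -> fine projections for the trend path
        self.up = nn.ModuleList(nn.Linear(lens[i + 1], lens[i])
                                for i in range(down_sampling_layers))

    def season_mix(self, season_list):
        # Bottom-up: inject fine-scale seasonal detail into coarser scales.
        # season_list[i] has shape (batch, channels, lens[i]).
        out = [season_list[0]]
        for i in range(1, len(season_list)):
            out.append(season_list[i] + self.down[i - 1](out[-1]))
        return out

    def trend_mix(self, trend_list):
        # Top-down: propagate the coarsest trend back towards finer scales.
        out = [trend_list[-1]]
        for i in range(len(trend_list) - 2, -1, -1):
            out.insert(0, trend_list[i] + self.up[i](out[0]))
        return out
```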
