Parameter | Type | Description | Default |
---|---|---|---|
h | int | forecast horizon. | required |
input_size | int | autoregressive input size, e.g. y=[1,2,3,4] with input_size=2 -> y_[t-2:t]=[1,2]. | required |
stat_exog_list | str list | static exogenous columns. | None |
hist_exog_list | str list | historic exogenous columns. | None |
futr_exog_list | str list | future exogenous columns. | None |
exclude_insample_y | bool | the model skips the autoregressive features y[t-input_size:t] if True. | False |
encoder_layers | int | number of layers for encoder. | 3 |
n_heads | int | number of attention heads for multi-head attention. | 16 |
hidden_size | int | units of embeddings and encoders. | 128 |
linear_hidden_size | int | units of linear layer. | 256 |
dropout | float | dropout rate for residual connection. | 0.2 |
fc_dropout | float | dropout rate for linear layer. | 0.2 |
head_dropout | float | dropout rate for Flatten head layer. | 0.0 |
attn_dropout | float | dropout rate for attention layer. | 0.0 |
patch_len | int | length of patch. Note: patch_len = min(patch_len, input_size + stride). | 16 |
stride | int | stride of patch. | 8 |
revin | bool | whether to use RevIN (reversible instance normalization). | True |
revin_affine | bool | whether to use affine parameters in RevIN. | False |
revin_subtract_last | bool | whether RevIN subtracts the last value instead of the mean. | True |
activation | str | activation function, one of ['gelu', 'relu']. | 'gelu' |
res_attention | bool | whether to use residual attention. | True |
batch_normalization | bool | whether to use batch normalization. | False |
learn_pos_embed | bool | whether to learn the positional embedding. | True |
loss | PyTorch module | instantiated train loss class from losses collection. | MAE() |
valid_loss | PyTorch module | instantiated valid loss class from losses collection. | None |
max_steps | int | maximum number of training steps. | 5000 |
learning_rate | float | learning rate between (0, 1). | 0.0001 |
num_lr_decays | int | number of learning rate decays, evenly distributed across max_steps. | -1 |
early_stop_patience_steps | int | number of validation iterations before early stopping. | -1 |
val_check_steps | int | number of training steps between every validation loss check. | 100 |
batch_size | int | number of different series in each batch. | 32 |
valid_batch_size | int | number of different series in each validation and test batch, if None uses batch_size. | None |
windows_batch_size | int | number of windows to sample in each training batch, default uses all. | 1024 |
inference_windows_batch_size | int | number of windows to sample in each inference batch. | 1024 |
start_padding_enabled | bool | if True, the model pads the time series with zeros at the beginning, by input_size. | False |
training_data_availability_threshold | Union[float, List[float]] | minimum fraction of valid data points required for training windows. Single float applies to both insample and outsample; list of two floats specifies [insample_fraction, outsample_fraction]. Default 0.0 allows windows with only 1 valid data point (current behavior). | 0.0 |
step_size | int | step size between each window of temporal data. | 1 |
scaler_type | str | type of scaler for temporal inputs normalization; see temporal scalers. | 'identity' |
random_seed | int | random seed for PyTorch initializers and NumPy generators. | 1 |
drop_last_loader | bool | if True, TimeSeriesDataLoader drops the last non-full batch. | False |
alias | str | optional, custom name of the model. | None |
optimizer | Subclass of 'torch.optim.Optimizer' | optional, user-specified optimizer instead of the default choice (Adam). | None |
optimizer_kwargs | dict | optional, dictionary of parameters used by the user-specified optimizer. | None |
lr_scheduler | Subclass of 'torch.optim.lr_scheduler.LRScheduler' | optional, user-specified lr_scheduler instead of the default choice (StepLR). | None |
lr_scheduler_kwargs | dict | optional, dictionary of parameters used by the user-specified lr_scheduler. | None |
dataloader_kwargs | dict | optional, dictionary of parameters passed by TimeSeriesDataLoader into the PyTorch Lightning dataloader. | None |
**trainer_kwargs | keyword arguments | additional trainer arguments inherited from PyTorch Lightning's Trainer. | |
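The parameters above map directly onto the model constructor. Below is a minimal usage sketch, assuming this table documents the PatchTST model from Nixtla's neuralforecast library; the toy dataset `Y_df` and the specific values chosen (h=12, input_size=48, and so on) are illustrative, not prescriptive.

```python
# Minimal sketch, assuming the neuralforecast PatchTST API;
# the data frame and parameter values below are illustrative only.
import pandas as pd
import torch
from neuralforecast import NeuralForecast
from neuralforecast.models import PatchTST
from neuralforecast.losses.pytorch import MAE

# Long-format input expected by neuralforecast: unique_id, ds, y.
Y_df = pd.DataFrame({
    "unique_id": ["series_1"] * 100,
    "ds": pd.date_range("2020-01-01", periods=100, freq="D"),
    "y": [float(i) for i in range(100)],
})

model = PatchTST(
    h=12,                                  # forecast horizon
    input_size=48,                         # autoregressive window length
    patch_len=16,                          # length of each patch
    stride=8,                              # stride between patches
    revin=True,                            # reversible instance normalization
    loss=MAE(),                            # training loss
    max_steps=500,                         # shortened for this sketch
    optimizer=torch.optim.AdamW,           # optional override of the default Adam
    optimizer_kwargs={"weight_decay": 1e-4},
)

nf = NeuralForecast(models=[model], freq="D")
nf.fit(df=Y_df)
forecasts = nf.predict()  # 12-step-ahead predictions per series
```

Note that `optimizer` receives the optimizer class itself (a subclass of torch.optim.Optimizer), not an instance; its constructor arguments go in `optimizer_kwargs`, matching the table entries above.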