Parameter | Type | Description | Default
--- | --- | --- | ---
h | int | forecast horizon. | required
input_size | int | considered autoregressive inputs (lags), e.g., y=[1,2,3,4] with input_size=2 -> lags=[3,4]. | required
hidden_size | int | number of units for the dense MLPs. | 512 |
decoder_output_dim | int | number of units for the output of the decoder. | 32 |
temporal_decoder_dim | int | number of units for the hidden size of the temporal decoder. | 128
dropout | float | dropout rate between (0, 1). | 0.3
layernorm | bool | if True, uses Layer Normalization on the MLP residual block outputs. | True
num_encoder_layers | int | number of encoder layers. | 1 |
num_decoder_layers | int | number of decoder layers. | 1 |
temporal_width | int | size of the lower-dimensional temporal projection. | 4
futr_exog_list | str list | future exogenous columns. | None |
hist_exog_list | str list | historic exogenous columns. | None |
stat_exog_list | str list | static exogenous columns. | None |
exclude_insample_y | bool | whether to exclude the target variable from the historic exogenous data. | False |
loss | PyTorch module | instantiated training loss class from the losses collection. | MAE()
valid_loss | PyTorch module | instantiated validation loss class from the losses collection. | None
max_steps | int | maximum number of training steps. | 1000 |
learning_rate | float | learning rate between (0, 1). | 0.001
num_lr_decays | int | number of learning rate decays, evenly distributed across max_steps. | -1
early_stop_patience_steps | int | number of validation checks without improvement before early stopping. | -1
val_check_steps | int | number of training steps between validation loss checks. | 100
batch_size | int | number of different series in each batch. | 32 |
valid_batch_size | int | number of different series in each validation and test batch. | None |
windows_batch_size | int | number of windows to sample in each training batch. | 1024
inference_windows_batch_size | int | number of windows to sample in each inference batch, -1 uses all. | 1024 |
start_padding_enabled | bool | if True, the model pads the time series with zeros at the beginning, by input_size. | False
training_data_availability_threshold | Union[float, List[float]] | minimum fraction of valid data points required for training windows. A single float applies to both insample and outsample; a list of two floats specifies [insample_fraction, outsample_fraction]. The default 0.0 allows windows with as few as one valid data point. | 0.0
step_size | int | step size between each window of temporal data. | 1 |
scaler_type | str | type of scaler for temporal inputs normalization; see temporal scalers. | 'identity'
random_seed | int | random seed for PyTorch initializers and NumPy generators. | 1
drop_last_loader | bool | if True, TimeSeriesDataLoader drops the last non-full batch. | False
alias | str | optional, custom name of the model. | None
optimizer | Subclass of 'torch.optim.Optimizer' | optional, user-specified optimizer instead of the default choice (Adam). | None
optimizer_kwargs | dict | optional, dictionary of parameters used by the user-specified optimizer. | None
lr_scheduler | Subclass of 'torch.optim.lr_scheduler.LRScheduler' | optional, user-specified lr_scheduler instead of the default choice (StepLR). | None
lr_scheduler_kwargs | dict | optional, dictionary of parameters used by the user-specified lr_scheduler. | None
dataloader_kwargs | dict | optional, dictionary of parameters passed to the PyTorch dataloader by the TimeSeriesDataLoader. | None
**trainer_kwargs | keyword arguments | trainer arguments inherited from PyTorch Lightning's Trainer. |
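
The parameter names and defaults above match the TiDE model from Nixtla's neuralforecast library; assuming that is the model being documented, a minimal usage sketch could look like the following. The synthetic dataframe and hyperparameter values are illustrative, not prescriptive.

```python
# Minimal sketch, assuming this table documents neuralforecast's TiDE model.
import numpy as np
import pandas as pd

from neuralforecast import NeuralForecast
from neuralforecast.models import TiDE
from neuralforecast.losses.pytorch import MAE

# NeuralForecast expects a long-format panel: unique_id, ds (timestamp), y (target).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "unique_id": ["series_1"] * 200,
    "ds": pd.date_range("2020-01-01", periods=200, freq="D"),
    "y": np.sin(np.arange(200) / 10) + rng.normal(0, 0.1, 200),
})

model = TiDE(
    h=12,                     # forecast horizon (required)
    input_size=24,            # autoregressive lags (required)
    hidden_size=512,          # dense MLP width (table default)
    decoder_output_dim=32,
    temporal_decoder_dim=128,
    dropout=0.3,
    layernorm=True,
    loss=MAE(),               # instantiated loss from the losses collection
    max_steps=500,
    scaler_type="identity",
    # Optional: pass an optimizer *class* (not an instance) plus its kwargs,
    # per the optimizer / optimizer_kwargs rows above, e.g. (requires `import torch`):
    # optimizer=torch.optim.AdamW,
    # optimizer_kwargs={"weight_decay": 1e-4},
)

nf = NeuralForecast(models=[model], freq="D")
nf.fit(df=df)
forecasts = nf.predict()      # returns h=12 rows per series
print(forecasts.head())
```

Note the asymmetry implied by the table's types: `optimizer` and `lr_scheduler` take subclasses (the class itself), while `loss` and `valid_loss` take instantiated modules.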