Parameter | Type | Description | Default |
---|---|---|---|
h | int | forecast horizon. | required |
input_size | int | maximum sequence length for truncated train backpropagation. Default -1 uses 3 * horizon. | -1 |
inference_input_size | int | maximum sequence length for truncated inference. Default None uses input_size history. | None |
h_train | int | maximum sequence length for truncated train backpropagation. Default 1. | 1 |
encoder_n_layers | int | number of layers for the RNN. | 2 |
encoder_hidden_size | int | units for the RNN's hidden state size. | 128 |
encoder_activation | str | RNN activation type, either 'tanh' or 'relu'. | 'tanh' |
encoder_bias | bool | whether or not to use biases b_ih, b_hh within RNN units. | True |
encoder_dropout | float | dropout regularization applied to RNN outputs. | 0.0 |
decoder_hidden_size | int | size of hidden layer for the MLP decoder. | 128 |
decoder_layers | int | number of layers for the MLP decoder. | 2 |
futr_exog_list | str list | future exogenous columns. | None |
hist_exog_list | str list | historic exogenous columns. | None |
stat_exog_list | str list | static exogenous columns. | None |
exclude_insample_y | bool | whether to exclude the target variable from the historic exogenous data. | False |
recurrent | bool | whether to produce forecasts recursively (True) or directly (False). | False |
loss | PyTorch module | instantiated training loss class from the losses collection. | MAE() |
valid_loss | PyTorch module | instantiated validation loss class from the losses collection. | None |
max_steps | int | maximum number of training steps. | 1000 |
learning_rate | float | learning rate in the range (0, 1). | 0.001 |
num_lr_decays | int | number of learning rate decays, evenly distributed across max_steps. | -1 |
early_stop_patience_steps | int | number of validation iterations before early stopping. | -1 |
val_check_steps | int | number of training steps between every validation loss check. | 100 |
batch_size | int | number of different series in each batch. | 32 |
valid_batch_size | int | number of different series in each validation and test batch. | None |
windows_batch_size | int | number of windows to sample in each training batch. | 128 |
inference_windows_batch_size | int | number of windows to sample in each inference batch, -1 uses all. | 1024 |
start_padding_enabled | bool | if True, the model pads the time series with zeros at the beginning by input_size. | False |
training_data_availability_threshold | Union[float, List[float]] | minimum fraction of valid data points required for training windows. A single float applies to both insample and outsample; a list of two floats specifies [insample_fraction, outsample_fraction]. Default 0.0 allows windows with as few as one valid data point. | 0.0 |
step_size | int | step size between each window of temporal data. | 1 |
scaler_type | str | type of scaler for temporal input normalization; see temporal scalers. | 'robust' |
random_seed | int | random seed for PyTorch initializers and NumPy generators. | 1 |
drop_last_loader | bool | if True, TimeSeriesDataLoader drops the last non-full batch. | False |
alias | str | optional, custom name of the model. | None |
optimizer | subclass of torch.optim.Optimizer | optional, user-specified optimizer instead of the default choice (Adam). | None |
optimizer_kwargs | dict | optional, dictionary of parameters used by the user-specified optimizer. | None |
lr_scheduler | subclass of torch.optim.lr_scheduler.LRScheduler | optional, user-specified lr_scheduler instead of the default choice (StepLR). | None |
lr_scheduler_kwargs | dict | optional, dictionary of parameters used by the user-specified lr_scheduler. | None |
dataloader_kwargs | dict | optional, dictionary of parameters passed to the PyTorch Lightning dataloader by the TimeSeriesDataLoader. | None |
**trainer_kwargs | keyword arguments | keyword trainer arguments inherited from PyTorch Lightning's Trainer. | |
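
For orientation, here is a minimal usage sketch. It assumes the parameters above belong to the constructor of a NeuralForecast recurrent model such as `neuralforecast.models.RNN` (the encoder/decoder parameters match its signature); adjust the import if this table documents a different model, and note that argument availability can vary across library versions.

```python
import pandas as pd
from neuralforecast import NeuralForecast
from neuralforecast.models import RNN
from neuralforecast.losses.pytorch import MAE

# Toy panel in the long format NeuralForecast expects:
# one series identified by unique_id, hourly timestamps in ds, target in y.
df = pd.DataFrame({
    "unique_id": "series_1",
    "ds": pd.date_range("2024-01-01", periods=200, freq="h"),
    "y": range(200),
})

model = RNN(
    h=12,                       # forecast horizon (required)
    input_size=-1,              # -1 -> use 3 * h of history
    encoder_n_layers=2,
    encoder_hidden_size=128,
    encoder_activation="tanh",
    decoder_hidden_size=128,
    decoder_layers=2,
    loss=MAE(),                 # instantiated training loss
    scaler_type="robust",
    max_steps=100,              # kept small so the sketch runs quickly
    random_seed=1,
)

nf = NeuralForecast(models=[model], freq="h")
nf.fit(df=df)
forecasts = nf.predict()       # 12-step-ahead forecasts per series
```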
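
A second sketch, under the same assumption, illustrates the exogenous lists and the optimizer/scheduler overrides. The column names `price` and `market_id` are hypothetical placeholders: a future exogenous column must also appear in the future frame passed at prediction time, and a static one in the static frame passed to fit. Note that `optimizer` and `lr_scheduler` take classes, not instances; their constructor arguments go in the matching `_kwargs` dictionaries.

```python
import torch
from neuralforecast.models import RNN

model = RNN(
    h=12,
    futr_exog_list=["price"],        # hypothetical column known into the future
    stat_exog_list=["market_id"],    # hypothetical column constant per series
    optimizer=torch.optim.AdamW,     # class replacing the default Adam
    optimizer_kwargs={"weight_decay": 1e-4},
    lr_scheduler=torch.optim.lr_scheduler.StepLR,  # same class as the default,
    lr_scheduler_kwargs={"step_size": 100, "gamma": 0.5},  # shown to illustrate the kwargs mechanism
)
```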