Parameter | Type | Description | Default |
--- | --- | --- | --- |
h | int | Forecast horizon. | required |
input_size | int | Length of input window (lags). | required |
stat_exog_list | list of str | Static exogenous columns. | None |
hist_exog_list | list of str | Historic exogenous columns. | None |
futr_exog_list | list of str | Future exogenous columns. | None |
exclude_insample_y | bool | If True, the model skips the autoregressive features y[t-input_size:t]. | False |
hidden_size | int | Size of the embeddings and encoder hidden layers. | 64 |
dropout | float | Dropout for embeddings. | 0.1 |
conv_hidden_size | int | Channels of the Inception block. | 64 |
top_k | int | Number of dominant periods the model selects. | 5 |
num_kernels | int | Number of kernels for the Inception block. | 6 |
encoder_layers | int | Number of encoder layers. | 2 |
loss | PyTorch module | Instantiated training loss class from the losses collection. | MAE() |
valid_loss | PyTorch module | Instantiated validation loss class from the losses collection. | None |
max_steps | int | Maximum number of training steps. | 1000 |
learning_rate | float | Learning rate. | 0.0001 |
num_lr_decays | int | Number of learning rate decays, evenly distributed across max_steps. If -1, no learning rate decay is performed. | -1 |
early_stop_patience_steps | int | Number of validation iterations before early stopping. If -1, no early stopping is performed. | -1 |
val_check_steps | int | Number of training steps between every validation loss check. | 100 |
batch_size | int | Number of different series in each batch. | 32 |
valid_batch_size | int | Number of different series in each validation and test batch; if None, uses batch_size. | None |
windows_batch_size | int | Number of windows to sample in each training batch. | 64 |
inference_windows_batch_size | int | Number of windows to sample in each inference batch. | 256 |
start_padding_enabled | bool | If True, the model pads the beginning of the time series with input_size zeros. | False |
training_data_availability_threshold | Union[float, List[float]] | Minimum fraction of valid data points required in training windows. A single float applies to both insample and outsample; a list of two floats specifies [insample_fraction, outsample_fraction]. The default 0.0 allows windows with as few as one valid data point. | 0.0 |
step_size | int | Step size between each window of temporal data. | 1 |
scaler_type | str | Type of scaler used to normalize temporal inputs; see temporal scalers. | 'standard' |
random_seed | int | Random seed for PyTorch initializers and NumPy generators. | 1 |
drop_last_loader | bool | If True, TimeSeriesDataLoader drops the last non-full batch. | False |
alias | str | Custom name of the model. | None |
optimizer | Subclass of torch.optim.Optimizer | User-specified optimizer instead of the default choice (Adam). | None |
optimizer_kwargs | dict | Dictionary of parameters used by the user-specified optimizer. | None |
lr_scheduler | Subclass of torch.optim.lr_scheduler.LRScheduler | User-specified lr_scheduler instead of the default choice (StepLR). | None |
lr_scheduler_kwargs | dict | Dictionary of parameters used by the user-specified lr_scheduler. | None |
dataloader_kwargs | dict | Dictionary of parameters passed to the PyTorch Lightning dataloader by the TimeSeriesDataLoader. | None |
**trainer_kwargs | keyword arguments | Keyword trainer arguments inherited from PyTorch Lightning's Trainer. | |
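
The parameter names above (conv_hidden_size, top_k, num_kernels, the Inception block) match the TimesNet model in the neuralforecast library. The following is a minimal usage sketch under that assumption; the class name, import paths, and toy data are illustrative, not the library's canonical example.

```python
# Minimal sketch: instantiating a model with the parameters documented above.
# Assumes the class is neuralforecast.models.TimesNet (the parameter names
# match that model); the toy panel below is invented for illustration.
import pandas as pd
from neuralforecast import NeuralForecast
from neuralforecast.models import TimesNet
from neuralforecast.losses.pytorch import MAE

# Toy panel in the long format the library expects: unique_id, ds, y.
df = pd.DataFrame({
    "unique_id": "series_1",
    "ds": pd.date_range("2020-01-01", periods=200, freq="D"),
    "y": range(200),
})

model = TimesNet(
    h=12,                 # forecast horizon
    input_size=24,        # length of the input window (lags)
    hidden_size=64,       # size of embeddings and encoders
    conv_hidden_size=64,  # channels of the Inception block
    top_k=5,              # number of dominant periods
    num_kernels=6,        # kernels of the Inception block
    loss=MAE(),           # instantiated training loss
    max_steps=100,        # short run to keep the sketch fast
    scaler_type="standard",
)

nf = NeuralForecast(models=[model], freq="D")
nf.fit(df=df)
forecasts = nf.predict()  # returns h rows of predictions per series
```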