h | int | forecast horizon. | required |
input_size | int | number of autoregressive inputs (lags); e.g. with y=[1,2,3,4] and input_size=2, the lags are [1,2]. | required |
hidden_size | int | units for the TCN’s hidden state size. Default: 16. | 16 |
dropout | float | dropout rate used for the dropout layers throughout the architecture. Default: 0.5. | 0.5 |
futr_exog_list | list | future exogenous columns. | None |
hist_exog_list | list | historic exogenous columns. | None |
stat_exog_list | list | static exogenous columns. | None |
exclude_insample_y | bool | the model skips the autoregressive features y[t-input_size:t] if True. Default: False. | False |
loss | Module | PyTorch module, instantiated train loss class from losses collection. | MAE() |
valid_loss | Module | PyTorch module, instantiated valid loss class from losses collection. | None |
max_steps | int | maximum number of training steps. Default: 1000. | 1000 |
learning_rate | float | learning rate between (0, 1). Default: 1e-3. | 0.001 |
num_lr_decays | int | Number of learning rate decays, evenly distributed across max_steps. Default: -1. | -1 |
early_stop_patience_steps | int | Number of validation iterations before early stopping. Default: -1. | -1 |
val_monitor | str | metric to monitor for early stopping. Valid options: 'ptl/val_loss', 'valid_loss', 'train_loss'. Default: 'ptl/val_loss'. | 'ptl/val_loss' |
val_check_steps | int | Number of training steps between every validation loss check. Default: 100. | 100 |
batch_size | int | number of different series in each batch. Default: 32. | 32 |
valid_batch_size | int | number of different series in each validation and test batch, if None uses batch_size. Default: None. | None |
windows_batch_size | int | number of windows to sample in each training batch; if None, all available windows are used. Default: 1024. | 1024 |
inference_windows_batch_size | int | number of windows to sample in each inference batch, -1 uses all. Default: 1024. | 1024 |
start_padding_enabled | bool | if True, pads the start of each time series with input_size zeros. Default: False. | False |
training_data_availability_threshold | Union[float, List[float]] | minimum fraction of valid data points required for training windows. A single float applies to both insample and outsample; a list of two floats specifies [insample_fraction, outsample_fraction]. The default of 0.0 allows windows with as little as one valid data point. Default: 0.0. | 0.0 |
step_size | int | step size between each window of temporal data. Default: 1. | 1 |
scaler_type | str | type of scaler for temporal input normalization; see temporal scalers. Default: 'identity'. | 'identity' |
random_seed | int | random seed for PyTorch initializers and NumPy generators. Default: 1. | 1 |
drop_last_loader | bool | if True, TimeSeriesDataLoader drops the last non-full batch. Default: False. | False |
alias | str | optional, Custom name of the model. Default: None. | None |
optimizer | Subclass of 'torch.optim.Optimizer' | optional, user-specified optimizer instead of the default choice (Adam). | None |
optimizer_kwargs | dict | optional, dict of parameters used by the user-specified optimizer. | None |
lr_scheduler | Subclass of 'torch.optim.lr_scheduler.LRScheduler' | optional, user-specified lr_scheduler instead of the default choice (StepLR). | None |
lr_scheduler_kwargs | dict | optional, dict of parameters used by the user-specified lr_scheduler. | None |
dataloader_kwargs | dict | optional, dict of parameters passed to the PyTorch Lightning dataloader by the TimeSeriesDataLoader. | None |
**trainer_kwargs | keyword arguments | additional keyword arguments passed to PyTorch Lightning's Trainer. | |
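The parameters above can be assembled into a keyword configuration and unpacked into the model constructor. A minimal sketch, assuming the neuralforecast package for the commented import; the values for h and input_size are illustrative, the rest are the documented defaults:

```python
# Hypothetical configuration dict mirroring the parameter table above.
# h and input_size are required; the remaining entries show documented defaults.
tcn_config = {
    "h": 12,                     # forecast horizon (required)
    "input_size": 24,            # number of autoregressive lags (required)
    "hidden_size": 16,           # TCN hidden state size
    "dropout": 0.5,              # dropout rate across the architecture
    "max_steps": 1000,           # maximum number of training steps
    "learning_rate": 1e-3,
    "batch_size": 32,            # number of different series per batch
    "windows_batch_size": 1024,  # windows sampled per training batch
    "scaler_type": "identity",   # temporal input scaler
    "random_seed": 1,
}

# With neuralforecast installed, the dict would be unpacked into the model:
#   from neuralforecast.models import TCN
#   model = TCN(**tcn_config)
```

Unrecognized keys would raise a TypeError at construction time, so keeping the dict aligned with the table doubles as a lightweight configuration check.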