FEDformer
The FEDformer model tackles the challenge of finding reliable dependencies in the intricate temporal patterns of long-horizon forecasting.
The architecture has the following distinctive features:
- In-built progressive decomposition into trend and seasonal components, based on a moving average filter.
- A Frequency Enhanced Block and Frequency Enhanced Attention that perform attention in a sparse representation on a basis such as the Fourier transform.
- The classic encoder-decoder proposed by Vaswani et al. (2017) with a multi-head attention mechanism.
The FEDformer model defines its embedding through two components:
- Encoded autoregressive features obtained from a convolution network.
- Absolute positional embeddings obtained from calendar features.
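The progressive trend/seasonal decomposition above can be sketched in a few lines of NumPy. This is an illustrative re-implementation, not the library's code: the series is padded by repeating its endpoints so the moving-average trend keeps the original length, and the seasonal part is the residual.

```python
import numpy as np

def moving_avg_decompose(x, window=25):
    """Split a series into trend (moving average) and seasonal (residual) parts.

    Endpoint-replication padding keeps the smoothed trend at the
    original length, mirroring the Autoformer-style decomposition.
    """
    pad_front = np.repeat(x[0], (window - 1) // 2)
    pad_back = np.repeat(x[-1], window // 2)
    padded = np.concatenate([pad_front, x, pad_back])
    kernel = np.ones(window) / window
    trend = np.convolve(padded, kernel, mode="valid")
    seasonal = x - trend
    return trend, seasonal

t = np.arange(100, dtype=float)
x = 0.1 * t + np.sin(2 * np.pi * t / 12)  # linear trend + yearly seasonality
trend, seasonal = moving_avg_decompose(x, window=25)
```

By construction `trend + seasonal` reconstructs the input exactly, which is what lets the encoder and decoder refine the two components progressively without losing information.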
1. Auxiliary functions
source
AutoCorrelationLayer
Auto Correlation Layer
source
LayerNorm
Special designed layernorm for the seasonal part
source
Decoder
FEDformer decoder
source
DecoderLayer
FEDformer decoder layer with the progressive decomposition architecture
source
Encoder
FEDformer encoder
source
EncoderLayer
FEDformer encoder layer with the progressive decomposition architecture
source
FourierCrossAttention
Fourier Cross Attention layer
source
FourierBlock
Fourier block
source
get_frequency_modes
Get modes in the frequency domain: 'random' samples modes randomly; any other value selects the lowest-frequency modes.
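A minimal sketch of the mode-selection logic described above (the function body is illustrative, written to match the two behaviors, not copied from the library). The returned indices pick which rFFT coefficients the Fourier block keeps; all others are zeroed out, giving the sparse frequency-domain representation.

```python
import numpy as np

def get_frequency_modes(seq_len, modes=64, mode_select="random"):
    """Pick which Fourier-frequency indices the block will keep.

    'random' samples `modes` indices uniformly without replacement from
    the available frequencies; any other value keeps the lowest modes.
    """
    modes = min(modes, seq_len // 2)  # cannot keep more modes than exist
    if mode_select == "random":
        index = sorted(np.random.choice(seq_len // 2, modes, replace=False))
    else:
        index = list(range(modes))
    return list(index)

lowest = get_frequency_modes(96, modes=8, mode_select="lowest")
sampled = get_frequency_modes(96, modes=8, mode_select="random")
```

With `mode_select='lowest'` the result is simply `[0, 1, ..., modes-1]`; random sampling spreads the kept modes across the spectrum, which the FEDformer paper argues gives a better sparse summary of long sequences.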
2. Model
source
FEDformer
FEDformer
The FEDformer model tackles the challenge of finding reliable dependencies in the intricate temporal patterns of long-horizon forecasting.
The architecture has the following distinctive features:
- In-built progressive decomposition into trend and seasonal components, based on a moving average filter.
- A Frequency Enhanced Block and Frequency Enhanced Attention that perform attention in a sparse representation on a basis such as the Fourier transform.
- The classic encoder-decoder proposed by Vaswani et al. (2017) with a multi-head attention mechanism.
The FEDformer model defines its embedding through two components:
- Encoded autoregressive features obtained from a convolution network.
- Absolute positional embeddings obtained from calendar features.
Parameters:
h : int, forecast horizon.
input_size : int, maximum sequence length for truncated train backpropagation. Default -1 uses all history.
futr_exog_list : str list, future exogenous columns.
hist_exog_list : str list, historic exogenous columns.
stat_exog_list : str list, static exogenous columns.
decoder_input_size_multiplier : float=0.5, fraction of input_size used as the decoder input length.
version : str='Fourier', version of the model.
modes : int=64, number of modes for the Fourier block.
mode_select : str='random', method to select the modes for the Fourier block.
hidden_size : int=128, units of embeddings and encoders.
dropout : float (0, 1), dropout throughout the FEDformer architecture.
n_head : int=8, number of multi-head attention heads.
conv_hidden_size : int=32, channels of the convolutional encoder.
activation : str='GELU', activation from ['ReLU', 'Softplus', 'Tanh', 'SELU', 'LeakyReLU', 'PReLU', 'Sigmoid', 'GELU'].
encoder_layers : int=2, number of layers for the encoder.
decoder_layers : int=1, number of layers for the decoder.
MovingAvg_window : int=25, window size for the moving average filter.
loss : PyTorch module, instantiated train loss class from losses collection.
valid_loss : PyTorch module, instantiated validation loss class from losses collection.
max_steps : int=1000, maximum number of training steps.
learning_rate : float=1e-3, learning rate between (0, 1).
num_lr_decays : int=-1, number of learning rate decays, evenly distributed across max_steps.
early_stop_patience_steps : int=-1, number of validation iterations before early stopping.
val_check_steps : int=100, number of training steps between every validation loss check.
batch_size : int=32, number of different series in each batch.
valid_batch_size : int=None, number of different series in each validation and test batch; if None uses batch_size.
windows_batch_size : int=1024, number of windows to sample in each training batch; default uses all.
inference_windows_batch_size : int=1024, number of windows to sample in each inference batch.
start_padding_enabled : bool=False, if True, the model will pad the time series with zeros at the beginning, by input size.
scaler_type : str='robust', type of scaler for temporal inputs normalization; see temporal scalers.
random_seed : int=1, random seed for pytorch initializer and numpy generators.
num_workers_loader : int=os.cpu_count(), workers to be used by TimeSeriesDataLoader.
drop_last_loader : bool=False, if True TimeSeriesDataLoader drops last non-full batch.
alias : str, optional, custom name of the model.
optimizer : subclass of 'torch.optim.Optimizer', optional, user-specified optimizer instead of the default choice (Adam).
optimizer_kwargs : dict, optional, parameters used by the user-specified optimizer.
lr_scheduler : subclass of 'torch.optim.lr_scheduler.LRScheduler', optional, user-specified lr_scheduler instead of the default choice (StepLR).
lr_scheduler_kwargs : dict, optional, parameters used by the user-specified lr_scheduler.
**trainer_kwargs : keyword trainer arguments inherited from PyTorch Lightning's trainer.