Automatic Forecasting Models

Automatic forecasting tools optimize the hyperparameters of a given model class and select the model that performs best on a validation set. The optimization methods include grid search, random search, and Bayesian optimization. A minimal usage sketch follows the table below.

| MLP-Based   | RNN-Based  | Transformers | CNN-Based    | Multivariate |
|-------------|------------|--------------|--------------|--------------|
| AutoMLP     | AutoRNN    | AutoTFT      | AutoTimesNet | AutoStemGNN  |
| AutoNBEATS  | AutoLSTM   | AutoInformer |              | AutoHINT     |
| AutoNBEATSx | AutoGRU    | Autoformer   |              |              |
| AutoNHITS   | AutoTCN    | AutoPatchTST |              |              |
|             | AutoDeepAR |              |              |              |
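
The sketch below tunes NHITS with `AutoNHITS`, assuming a recent NeuralForecast release (constructor arguments such as `num_samples` can vary between versions) and using the bundled `AirPassengersDF` example data for illustration:

```python
from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoNHITS
from neuralforecast.utils import AirPassengersDF

# AutoNHITS samples hyperparameter configurations from its default search
# space and keeps the one that scores best on the validation set.
nf = NeuralForecast(
    models=[AutoNHITS(h=12, num_samples=10)],  # try 10 configurations
    freq='M',
)
nf.fit(df=AirPassengersDF)  # long-format frame: unique_id, ds, y
forecasts = nf.predict()    # 12-step-ahead forecasts per series
```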

Optimization Objectives

NeuralForecast is a highly modular framework capable of augmenting a wide variety of robust neural network architectures with different point or probabilistic outputs, as defined by their optimization objectives. A sketch of how an objective is attached to a model follows the tables below.

| Scale-Dependent | Percentage-Errors | Scale-Independent | Robust      |
|-----------------|-------------------|-------------------|-------------|
| MAE             | MAPE              | MASE              | Huber       |
| MSE             | sMAPE             |                   | Tukey       |
| RMSE            |                   |                   | HuberMQLoss |

| Parametric Probabilities | Non-Parametric Probabilities |
|--------------------------|------------------------------|
| Normal                   | QuantileLoss                 |
| StudentT                 | MQLoss                       |
| Poisson                  | HuberQLoss                   |
| Negative Binomial        | HuberMQLoss                  |
| Tweedie                  |                              |
| PMM / GMM                |                              |
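
Any of these losses can be passed to a model through its `loss` argument. The snippet below is a sketch assuming a recent NeuralForecast release; the quantile levels, distribution choice, and `alias` labels are illustrative choices, not defaults:

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS
from neuralforecast.losses.pytorch import MQLoss, DistributionLoss

models = [
    # Non-parametric probabilistic output: one forecast per quantile.
    NHITS(h=12, input_size=24,
          loss=MQLoss(quantiles=[0.1, 0.5, 0.9]), alias='NHITS-MQ'),
    # Parametric probabilistic output: the network predicts the
    # parameters of a Normal distribution for each horizon step.
    NHITS(h=12, input_size=24,
          loss=DistributionLoss(distribution='Normal'), alias='NHITS-Normal'),
]
nf = NeuralForecast(models=models, freq='M')
```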

MLP-Based Model Family

The MLP-based family operates like a classic autoencoder: its initial layers encode the raw autoregressive window into a representation, and the decoder produces the desired output according to the horizon and the point or probabilistic objective. Recent architectures add modifications such as residual learning techniques and task-specific changes. A minimal sketch of the encode/decode pattern follows the table below.

| Model   | Point Forecast | Probabilistic Forecast | Insample fitted values | Probabilistic fitted values |
|---------|----------------|------------------------|------------------------|-----------------------------|
| MLP     | βœ…             | βœ…                     | βœ…                     | βœ…                          |
| NBEATS  | βœ…             | βœ…                     | βœ…                     | βœ…                          |
| NBEATSx | βœ…             | βœ…                     | βœ…                     | βœ…                          |
| NHITS   | βœ…             | βœ…                     | βœ…                     | βœ…                          |
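
The PyTorch snippet below is a hypothetical minimal illustration of that pattern, not any specific NeuralForecast model: an MLP encodes a fixed-size input window, and a linear decoder maps the representation to h point forecasts.

```python
import torch
import torch.nn as nn

# Hypothetical minimal MLP forecaster: encode a fixed autoregressive
# window into a hidden representation, decode it into h point forecasts.
class WindowMLP(nn.Module):
    def __init__(self, input_size: int, h: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_size, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, h)  # one output per horizon step

    def forward(self, window: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(window))

y_hat = WindowMLP(input_size=24, h=12)(torch.randn(32, 24))  # shape [32, 12]
```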

RNN-Based Model Family

The RNN-based family attempts to leverage the data’s temporal structure while reducing the over-parametrization of MLPs. Recurrent networks are dynamic and can handle sequences of varying lengths through a mechanism that updates internal states using the entire sequence history. Modern modifications to the cell state help diminish vanishing and exploding gradients. A sketch of this mechanism follows the table below.

| Model      | Point Forecast | Probabilistic Forecast | Insample fitted values | Probabilistic fitted values |
|------------|----------------|------------------------|------------------------|-----------------------------|
| RNN        | βœ…             | βœ…                     | βœ…                     | βœ…                          |
| GRU        | βœ…             | βœ…                     | βœ…                     | βœ…                          |
| LSTM       | βœ…             | βœ…                     | βœ…                     | βœ…                          |
| TCN        | βœ…             | βœ…                     | βœ…                     | βœ…                          |
| DeepAR     | βœ…             | βœ…                     | βœ…                     | βœ…                          |
| DilatedRNN | βœ…             | βœ…                     | βœ…                     | βœ…                          |
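
The hypothetical PyTorch sketch below illustrates the idea, again independent of any specific NeuralForecast model: the recurrent state summarizes an arbitrarily long history into a fixed-size vector before the forecast is decoded.

```python
import torch
import torch.nn as nn

# Hypothetical minimal recurrent forecaster: the LSTM updates a hidden
# state step by step, so a sequence of any length is summarized into a
# fixed-size state before decoding the h-step forecast.
class RNNForecaster(nn.Module):
    def __init__(self, h: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, h)

    def forward(self, y: torch.Tensor) -> torch.Tensor:  # y: [batch, time, 1]
        _, (h_t, _) = self.rnn(y)   # h_t holds the final hidden state
        return self.head(h_t[-1])   # decode into the h-step forecast

model = RNNForecaster(h=12)
print(model(torch.randn(8, 36, 1)).shape)  # same model handles any length
```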

Transformers Model Family

Transformer architectures are an alternative to recurrent networks. These networks build on the self-attention mechanism, which directly models the relationships between different parts of a sequence without sequential processing. Attention is also what makes Transformers more parallelizable than RNNs; a minimal self-attention sketch follows the table below.

| Model              | Point Forecast | Probabilistic Forecast | Insample fitted values | Probabilistic fitted values |
|--------------------|----------------|------------------------|------------------------|-----------------------------|
| TFT                | βœ…             | βœ…                     | βœ…                     | βœ…                          |
| Informer           | βœ…             | βœ…                     | βœ…                     | βœ…                          |
| Autoformer         | βœ…             | βœ…                     | βœ…                     | βœ…                          |
| PatchTST           | βœ…             | βœ…                     | βœ…                     | βœ…                          |
| VanillaTransformer | βœ…             | βœ…                     | βœ…                     | βœ…                          |
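
The short PyTorch sketch below illustrates the mechanism with `nn.MultiheadAttention`; it is a toy example of self-attention, not a forecasting model.

```python
import torch
import torch.nn as nn

# Self-attention relates every time step to every other one in a single
# matrix operation, with no recurrent state to carry across steps; this
# is what makes Transformers parallelizable over the sequence dimension.
attn = nn.MultiheadAttention(embed_dim=32, num_heads=4, batch_first=True)
x = torch.randn(8, 36, 32)       # [batch, time, features]
out, weights = attn(x, x, x)     # queries, keys, values all come from x
print(out.shape, weights.shape)  # [8, 36, 32] and [8, 36, 36]
```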

CNN-Based Model Family

Convolutional Neural Networks (CNNs), originally celebrated for their accomplishments in image processing and computer vision, have also proven effective for time series forecasting. Applied to temporal data, CNNs use their convolutional layers to automatically and adaptively learn temporal patterns from the input, uncovering subtle structure embedded within a series of values. A basic sketch of a temporal convolution follows the table below.

| Model    | Point Forecast | Probabilistic Forecast | Insample fitted values | Probabilistic fitted values |
|----------|----------------|------------------------|------------------------|-----------------------------|
| TimesNet | βœ…             | βœ…                     | βœ…                     | βœ…                          |
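
The hypothetical PyTorch sketch below shows the basic operation with a dilated `nn.Conv1d` over a univariate series; the channel counts and kernel size are illustrative choices.

```python
import torch
import torch.nn as nn

# A dilated 1-D convolution slides a learned kernel along the time axis,
# so each output step mixes a local (here, dilated) temporal neighborhood.
conv = nn.Conv1d(in_channels=1, out_channels=8,
                 kernel_size=3, dilation=2, padding=2)
y = torch.randn(4, 1, 48)  # [batch, channels, time]
print(conv(y).shape)       # torch.Size([4, 8, 48])
```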