The Spectral Temporal Graph Neural Network (StemGNN) is a Graph-based multivariate time-series forecasting model. StemGNN jointly learns temporal dependencies and inter-series correlations in the spectral domain, by combining Graph Fourier Transform (GFT) and Discrete Fourier Transform (DFT).

This method achieved state-of-the-art performance on spatio-temporal datasets such as Solar, METR-LA, and PEMS-BAY.
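Conceptually, the GFT projects the multivariate signal onto the eigenvectors of a graph Laplacian built from inter-series correlations, and the DFT decomposes each projected signal over time; both are invertible, so the model can filter in the spectral domain and map back. A minimal NumPy illustration of these two transforms follows; the adjacency matrix and data are toy placeholders, since in StemGNN the graph is learned by a latent correlation layer rather than given.

import numpy as np

# Toy multivariate series: n_series x time_steps (placeholder data).
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 24))

# Toy symmetric adjacency between the 3 series (illustrative only).
W = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 0.2],
              [0.5, 0.2, 0.0]])

# Normalized graph Laplacian L = I - D^{-1/2} W D^{-1/2}.
D_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
L = np.eye(3) - D_inv_sqrt @ W @ D_inv_sqrt

# Graph Fourier Transform: project the series onto the Laplacian eigenvectors.
eigvals, U = np.linalg.eigh(L)   # eigenvalues act as graph frequencies
X_gft = U.T @ X                  # spectral representation across series

# Discrete Fourier Transform along time for each graph-spectral signal.
X_dft = np.fft.rfft(X_gft, axis=1)

# Both transforms are invertible: undoing them recovers the original series.
X_rec = U @ np.fft.irfft(X_dft, n=X.shape[1], axis=1)
assert np.allclose(X, X_rec)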

References
- Defu Cao, Yujing Wang, Juanyong Duan, Ce Zhang, Xia Zhu, Congrui Huang, Yunhai Tong, Bixiong Xu, Jing Bai, Jie Tong, Qi Zhang (2020). “Spectral Temporal Graph Neural Network for Multivariate Time-series Forecasting”. Advances in Neural Information Processing Systems 33 (NeurIPS 2020).



GLU

 GLU (input_channel, output_channel)

Gated Linear Unit used inside StemGNN’s spectral blocks.
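The rendered entry above shows only the signature. For reference, a gated linear unit with this (input_channel, output_channel) interface can be sketched in a few lines of PyTorch; the class name GLUSketch and the exact formulation below are assumptions for illustration, not the library’s implementation.

import torch
import torch.nn as nn

class GLUSketch(nn.Module):
    """Minimal gated linear unit: output = linear(x) * sigmoid(gate(x))."""

    def __init__(self, input_channel: int, output_channel: int):
        super().__init__()
        self.linear = nn.Linear(input_channel, output_channel)
        self.gate = nn.Linear(input_channel, output_channel)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x) * torch.sigmoid(self.gate(x))

# Example: gate a batch of 8 feature vectors from 16 to 32 channels.
y = GLUSketch(16, 32)(torch.randn(8, 16))
print(y.shape)  # torch.Size([8, 32])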



StockBlockLayer

 StockBlockLayer (time_step, unit, multi_layer, stack_cnt=0)

Building block of StemGNN: one stack layer that combines the Graph Fourier Transform, the Discrete Fourier Transform, and GLU layers to model the input in the spectral domain.
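The StemGNN paper describes the core of this block as a spectral sequential cell: a DFT along the time axis, gated filtering of the real and imaginary parts, and an inverse DFT. The snippet below is only a conceptual sketch of that idea (class name, shapes, and the linear gates are assumptions); it omits the graph convolution, the multi_layer expansion, and the forecast/backcast outputs of the actual layer.

import torch
import torch.nn as nn

class SpectralCellSketch(nn.Module):
    """Conceptual DFT -> gated filtering -> inverse DFT cell (illustrative only)."""

    def __init__(self, time_step: int):
        super().__init__()
        freq_bins = time_step // 2 + 1          # length of the rfft output
        self.gate_real = nn.Linear(freq_bins, freq_bins)
        self.gate_imag = nn.Linear(freq_bins, freq_bins)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_series, time_step)
        spec = torch.fft.rfft(x, dim=-1)
        real = spec.real * torch.sigmoid(self.gate_real(spec.real))
        imag = spec.imag * torch.sigmoid(self.gate_imag(spec.imag))
        filtered = torch.complex(real, imag)
        return torch.fft.irfft(filtered, n=x.size(-1), dim=-1)

out = SpectralCellSketch(time_step=24)(torch.randn(4, 3, 24))
print(out.shape)  # torch.Size([4, 3, 24])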



StemGNN

 StemGNN (h, input_size, n_series, futr_exog_list=None,
          hist_exog_list=None, stat_exog_list=None, n_stacks=2,
          multi_layer:int=5, dropout_rate:float=0.5, leaky_rate:float=0.2,
          loss=MAE(), valid_loss=None, max_steps:int=1000,
          learning_rate:float=0.001, num_lr_decays:int=3,
          early_stop_patience_steps:int=-1, val_check_steps:int=100,
          batch_size:int=32, step_size:int=1, scaler_type:str='robust',
          random_seed:int=1, num_workers_loader=0, drop_last_loader=False,
          optimizer=None, optimizer_kwargs=None, lr_scheduler=None,
          lr_scheduler_kwargs=None, **trainer_kwargs)

StemGNN

The Spectral Temporal Graph Neural Network (StemGNN) is a Graph-based multivariate time-series forecasting model. StemGNN jointly learns temporal dependencies and inter-series correlations in the spectral domain, by combining Graph Fourier Transform (GFT) and Discrete Fourier Transform (DFT).

Parameters:
h: int, Forecast horizon.
input_size: int, autoregressive input size, y=[1,2,3,4] input_size=2 -> y_[t-2:t]=[3,4].
n_series: int, number of time-series.
stat_exog_list: str list, static exogenous columns.
hist_exog_list: str list, historic exogenous columns.
futr_exog_list: str list, future exogenous columns.
n_stacks: int=2, number of stacks in the model.
multi_layer: int=5, multiplier for FC hidden size on StemGNN blocks.
dropout_rate: float=0.5, dropout rate.
leaky_rate: float=0.2, alpha for the LeakyReLU in the latent correlation layer.
loss: PyTorch module, instantiated train loss class from losses collection.
valid_loss: PyTorch module=loss, instantiated valid loss class from losses collection.
max_steps: int=1000, maximum number of training steps.
learning_rate: float=1e-3, Learning rate between (0, 1).
num_lr_decays: int=3, Number of learning rate decays, evenly distributed across max_steps.
early_stop_patience_steps: int=-1, Number of validation iterations before early stopping.
val_check_steps: int=100, Number of training steps between every validation loss check.
batch_size: int, number of windows in each batch.
step_size: int=1, step size between each window of temporal data.
scaler_type: str=‘robust’, type of scaler for temporal inputs normalization see temporal scalers.
random_seed: int, random_seed for pytorch initializer and numpy generators.
num_workers_loader: int=0, workers to be used by TimeSeriesDataLoader.
drop_last_loader: bool=False, if True TimeSeriesDataLoader drops last non-full batch.
alias: str, optional, Custom name of the model.
optimizer: Subclass of ‘torch.optim.Optimizer’, optional, user specified optimizer instead of the default choice (Adam); see the sketch after this list.
optimizer_kwargs: dict, optional, dictionary of parameters used by the user specified optimizer.
lr_scheduler: Subclass of ‘torch.optim.lr_scheduler.LRScheduler’, optional, user specified lr_scheduler instead of the default choice (StepLR).
lr_scheduler_kwargs: dict, optional, dictionary of parameters used by the user specified lr_scheduler.
**trainer_kwargs: keyword trainer arguments inherited from PyTorch Lightning’s trainer.
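As an example of the optimizer and lr_scheduler arguments above, the sketch below swaps the default Adam/StepLR pair for SGD with cosine annealing; the specific classes and keyword values are illustrative choices, not recommendations.

import torch
from neuralforecast.models import StemGNN
from neuralforecast.losses.pytorch import MAE

# Illustrative configuration: classes are passed, their kwargs as dicts.
model = StemGNN(
    h=12,
    input_size=24,
    n_series=2,
    loss=MAE(),
    max_steps=200,
    optimizer=torch.optim.SGD,
    optimizer_kwargs={'momentum': 0.9},
    lr_scheduler=torch.optim.lr_scheduler.CosineAnnealingLR,
    lr_scheduler_kwargs={'T_max': 200},
)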


StemGNN.fit

 StemGNN.fit (dataset, val_size=0, test_size=0, random_seed=None,
              distributed_config=None)

Fit.

The fit method optimizes the neural network’s weights using the initialization parameters (learning_rate, batch_size, …) and the loss function defined during initialization. Within fit we use a PyTorch Lightning Trainer that inherits the initialization’s self.trainer_kwargs to customize its inputs; see PL’s trainer arguments.

The method is designed to be compatible with SKLearn-like classes and in particular to be compatible with the StatsForecast library.

By default the model does not save training checkpoints, to protect disk memory; to enable them, set enable_checkpointing=True in __init__ (see the sketch after the parameter list below).

Parameters:
dataset: NeuralForecast’s TimeSeriesDataset, see documentation.
val_size: int, validation size for temporal cross-validation.
test_size: int, test size for temporal cross-validation.
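Since **trainer_kwargs is forwarded to the PyTorch Lightning Trainer, checkpointing and other Trainer options can be set at construction time; the keyword values below are illustrative.

from neuralforecast import NeuralForecast
from neuralforecast.models import StemGNN

# Trainer options such as enable_checkpointing travel through **trainer_kwargs.
model = StemGNN(h=12, input_size=24, n_series=2,
                max_steps=100,
                enable_checkpointing=True,  # keep training checkpoints on disk
                accelerator='cpu')          # run on CPU (illustrative)

nf = NeuralForecast(models=[model], freq='M')
# nf.fit(df=Y_train_df, val_size=12)  # Y_train_df as in the usage examples below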


StemGNN.predict

 StemGNN.predict (dataset, test_size=None, step_size=1, random_seed=None,
                  **data_module_kwargs)

Predict.

Neural network prediction with PL’s Trainer execution of predict_step.

Parameters:
dataset: NeuralForecast’s TimeSeriesDataset, see documentation.
test_size: int=None, test size for temporal cross-validation.
step_size: int=1, Step size between each window.
**data_module_kwargs: PL’s TimeSeriesDataModule args, see documentation.

Usage Examples

Train the model and forecast future values with the predict method.

import pandas as pd
import matplotlib.pyplot as plt

from neuralforecast import NeuralForecast
from neuralforecast.models import StemGNN
from neuralforecast.utils import AirPassengersPanel, AirPassengersStatic
from neuralforecast.losses.pytorch import MAE

Y_train_df = AirPassengersPanel[AirPassengersPanel.ds<AirPassengersPanel['ds'].values[-12]].reset_index(drop=True) # 132 train
Y_test_df = AirPassengersPanel[AirPassengersPanel.ds>=AirPassengersPanel['ds'].values[-12]].reset_index(drop=True) # 12 test

model = StemGNN(h=12,
                input_size=24,
                n_series=2,
                scaler_type='robust',
                max_steps=100,
                early_stop_patience_steps=-1,
                val_check_steps=10,
                learning_rate=1e-3,
                loss=MAE(),
                valid_loss=None,
                batch_size=32
                )

fcst = NeuralForecast(models=[model], freq='M')
fcst.fit(df=Y_train_df, static_df=AirPassengersStatic, val_size=12)
forecasts = fcst.predict(futr_df=Y_test_df)

# Plot predictions
fig, ax = plt.subplots(1, 1, figsize = (20, 7))
Y_hat_df = forecasts.reset_index(drop=False).drop(columns=['unique_id','ds'])
plot_df = pd.concat([Y_test_df, Y_hat_df], axis=1)
plot_df = pd.concat([Y_train_df, plot_df])

plot_df = plot_df[plot_df.unique_id=='Airline1'].drop('unique_id', axis=1)
plt.plot(plot_df['ds'], plot_df['y'], c='black', label='True')
plt.plot(plot_df['ds'], plot_df['StemGNN'], c='blue', label='Forecast')
ax.set_title('AirPassengers Forecast', fontsize=22)
ax.set_ylabel('Monthly Passengers', fontsize=20)
ax.set_xlabel('Year', fontsize=20)
ax.legend(prop={'size': 15})
ax.grid()

Using cross_validation to forecast multiple historic values.

fcst = NeuralForecast(models=[model], freq='M')
forecasts = fcst.cross_validation(df=AirPassengersPanel, static_df=AirPassengersStatic, n_windows=2, step_size=12)

# Plot predictions
fig, ax = plt.subplots(1, 1, figsize = (20, 7))
Y_hat_df = forecasts.loc['Airline1']
Y_df = AirPassengersPanel[AirPassengersPanel['unique_id']=='Airline1']

plt.plot(Y_df['ds'], Y_df['y'], c='black', label='True')
plt.plot(Y_hat_df['ds'], Y_hat_df['StemGNN'], c='blue', label='Forecast')
ax.set_title('AirPassengers Forecast', fontsize=22)
ax.set_ylabel('Monthly Passengers', fontsize=20)
ax.set_xlabel('Year', fontsize=20)
ax.legend(prop={'size': 15})
ax.grid()