The Bidirectional Temporal Convolutional Network (BiTCN) is a forecasting architecture based on two temporal convolutional networks (TCNs). The first network ('forward') encodes future covariates of the time series, whereas the second network ('backward') encodes past observations and covariates. This design preserves the temporal information of sequence data and is computationally more efficient than common RNN methods (LSTM, GRU, ...). Compared to Transformer-based methods, BiTCN has a lower space complexity, i.e. it requires orders of magnitude fewer parameters. This model may be a good choice if you seek a small model (few trainable parameters) with few hyperparameters to tune (only 2).

Figure 1. Visualization of a stack of dilated causal convolutional layers.
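The stack in Figure 1 can be illustrated in a few lines of PyTorch. The following is a minimal sketch (not BiTCN's actual implementation) of stacked dilated causal convolutions: each layer doubles the dilation, so the receptive field grows exponentially with depth, while causal padding ensures output t only sees inputs up to t.

import torch
import torch.nn as nn

class DilatedCausalStack(nn.Module):
    """Stack of dilated causal Conv1d layers (illustrative sketch only)."""
    def __init__(self, channels, kernel_size=2, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Conv1d(channels, channels, kernel_size,
                      dilation=2 ** i,
                      padding=(kernel_size - 1) * 2 ** i)
            for i in range(n_layers)
        ])

    def forward(self, x):  # x: (batch, channels, time)
        for conv in self.layers:
            # Conv1d pads both sides; keeping only the first `time` outputs
            # is equivalent to left (causal) padding.
            x = torch.relu(conv(x)[..., : x.shape[-1]])
        return x

stack = DilatedCausalStack(channels=8)
y = stack(torch.randn(1, 8, 64))  # receptive field: 1 + (2-1)*(1+2+4+8) = 16 steps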

1. BiTCN

BiTCN

BiTCN(
    h,
    input_size,
    hidden_size=16,
    dropout=0.5,
    futr_exog_list=None,
    hist_exog_list=None,
    stat_exog_list=None,
    exclude_insample_y=False,
    loss=MAE(),
    valid_loss=None,
    max_steps=1000,
    learning_rate=0.001,
    num_lr_decays=-1,
    early_stop_patience_steps=-1,
    val_check_steps=100,
    batch_size=32,
    valid_batch_size=None,
    windows_batch_size=1024,
    inference_windows_batch_size=1024,
    start_padding_enabled=False,
    training_data_availability_threshold=0.0,
    step_size=1,
    scaler_type="identity",
    random_seed=1,
    drop_last_loader=False,
    alias=None,
    optimizer=None,
    optimizer_kwargs=None,
    lr_scheduler=None,
    lr_scheduler_kwargs=None,
    dataloader_kwargs=None,
    **trainer_kwargs
)
Bases: BaseModel

Bidirectional Temporal Convolutional Network (BiTCN): a forecasting architecture based on two temporal convolutional networks (TCNs). The first network ('forward') encodes future covariates of the time series, whereas the second network ('backward') encodes past observations and covariates. This is a univariate model.

Parameters:
| Name | Type | Description | Default |
| --- | --- | --- | --- |
| h | int | Forecast horizon. | required |
| input_size | int | Considered autoregressive inputs (lags); e.g. y=[1,2,3,4] with input_size=2 -> lags=[1,2]. | required |
| hidden_size | int | Units for the TCN's hidden state size. | 16 |
| dropout | float | Dropout rate used for the dropout layers throughout the architecture. | 0.5 |
| futr_exog_list | list | Future exogenous columns. | None |
| hist_exog_list | list | Historic exogenous columns. | None |
| stat_exog_list | list | Static exogenous columns. | None |
| exclude_insample_y | bool | If True, the model skips the autoregressive features y[t-input_size:t]. | False |
| loss | Module | PyTorch module, instantiated train loss class from the losses collection. | MAE() |
| valid_loss | Module | PyTorch module, instantiated valid loss class from the losses collection. | None |
| max_steps | int | Maximum number of training steps. | 1000 |
| learning_rate | float | Learning rate between (0, 1). | 0.001 |
| num_lr_decays | int | Number of learning rate decays, evenly distributed across max_steps. | -1 |
| early_stop_patience_steps | int | Number of validation iterations before early stopping. | -1 |
| val_check_steps | int | Number of training steps between every validation loss check. | 100 |
| batch_size | int | Number of different series in each batch. | 32 |
| valid_batch_size | int | Number of different series in each validation and test batch; if None, uses batch_size. | None |
| windows_batch_size | int | Number of windows to sample in each training batch. | 1024 |
| inference_windows_batch_size | int | Number of windows to sample in each inference batch; -1 uses all. | 1024 |
| start_padding_enabled | bool | If True, the model pads the time series with zeros at the beginning, by input size. | False |
| training_data_availability_threshold | Union[float, List[float]] | Minimum fraction of valid data points required for training windows. A single float applies to both insample and outsample; a list of two floats specifies [insample_fraction, outsample_fraction]. The default 0.0 allows windows with only one valid data point. | 0.0 |
| step_size | int | Step size between each window of temporal data. | 1 |
| scaler_type | str | Type of scaler for temporal inputs normalization; see temporal scalers. | 'identity' |
| random_seed | int | Random seed for PyTorch initializers and NumPy generators. | 1 |
| drop_last_loader | bool | If True, TimeSeriesDataLoader drops the last non-full batch. | False |
| alias | str | Optional custom name of the model. | None |
| optimizer | Subclass of torch.optim.Optimizer | Optional user-specified optimizer instead of the default choice (Adam). | None |
| optimizer_kwargs | dict | Optional dict of parameters used by the user-specified optimizer. | None |
| lr_scheduler | Subclass of torch.optim.lr_scheduler.LRScheduler | Optional user-specified lr_scheduler instead of the default choice (StepLR). | None |
| lr_scheduler_kwargs | dict | Optional dict of parameters used by the user-specified lr_scheduler. | None |
| dataloader_kwargs | dict | Optional dict of parameters passed into the PyTorch Lightning dataloader by the TimeSeriesDataLoader. | None |
| **trainer_kwargs | | Keyword trainer arguments inherited from PyTorch Lightning's trainer. | |
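As the table suggests, hidden_size and dropout are the only two model-specific hyperparameters; everything else controls training or data handling. A minimal instantiation sketch (values are illustrative):

from neuralforecast.models import BiTCN

model = BiTCN(
    h=12,            # forecast horizon (required)
    input_size=24,   # autoregressive lags (required)
    hidden_size=16,  # TCN hidden state size
    dropout=0.5,     # dropout rate across the architecture
)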

BiTCN.fit

fit(
    dataset, val_size=0, test_size=0, random_seed=None, distributed_config=None
)
Fit.

The fit method optimizes the neural network's weights using the initialization parameters (learning_rate, windows_batch_size, ...) and the loss function defined during initialization. Within fit, we use a PyTorch Lightning Trainer that inherits the initialization's self.trainer_kwargs to customize its inputs; see PL's trainer arguments. The method is designed to be compatible with SKLearn-like classes, and in particular with the StatsForecast library. By default, the model does not save training checkpoints to protect disk memory; to save them, set enable_checkpointing=True in __init__.

Parameters:
| Name | Type | Description | Default |
| --- | --- | --- | --- |
| dataset | TimeSeriesDataset | NeuralForecast's TimeSeriesDataset; see documentation. | required |
| val_size | int | Validation size for temporal cross-validation. | 0 |
| test_size | int | Test size for temporal cross-validation. | 0 |
| random_seed | int | Random seed for PyTorch initializers and NumPy generators; overwrites model.__init__'s. | None |
Returns:

| Type | Description |
| --- | --- |
| None | |
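In practice, fit is usually invoked through the NeuralForecast wrapper, which builds the TimeSeriesDataset and calls BiTCN.fit internally. A hedged sketch showing temporal validation with early stopping, assuming Y_df is a long-format dataframe with columns ['unique_id', 'ds', 'y']:

from neuralforecast import NeuralForecast
from neuralforecast.models import BiTCN

fcst = NeuralForecast(
    models=[BiTCN(h=12, input_size=24,
                  val_check_steps=50,             # check validation loss every 50 steps
                  early_stop_patience_steps=3)],  # stop after 3 checks without improvement
    freq='ME',
)
fcst.fit(df=Y_df, val_size=12)  # val_size enables temporal cross-validation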

BiTCN.predict

predict(
    dataset,
    test_size=None,
    step_size=1,
    random_seed=None,
    quantiles=None,
    h=None,
    explainer_config=None,
    **data_module_kwargs
)
Predict.

Neural network prediction, using PL's Trainer to execute predict_step.

Parameters:
| Name | Type | Description | Default |
| --- | --- | --- | --- |
| dataset | TimeSeriesDataset | NeuralForecast's TimeSeriesDataset; see documentation. | required |
| test_size | int | Test size for temporal cross-validation. | None |
| step_size | int | Step size between each window. | 1 |
| random_seed | int | Random seed for PyTorch initializers and NumPy generators; overwrites model.__init__'s. | None |
| quantiles | list | Target quantiles to predict. | None |
| h | int | Prediction horizon; if None, uses the model's fitted horizon. | None |
| explainer_config | dict | Configuration for explanations. | None |
| **data_module_kwargs | dict | PL's TimeSeriesDataModule args; see documentation. | |
Returns:

| Type | Description |
| --- | --- |
| None | |
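For completeness, a sketch of the model-level fit/predict cycle documented above; most workflows should prefer the NeuralForecast wrapper instead. This assumes TimeSeriesDataset.from_df returns the dataset as the first element of a tuple, and that Y_df is a long-format dataframe with columns ['unique_id', 'ds', 'y']:

from neuralforecast.models import BiTCN
from neuralforecast.tsdataset import TimeSeriesDataset

# Assumption: from_df returns the dataset first, followed by index metadata.
dataset, *_ = TimeSeriesDataset.from_df(Y_df)
model = BiTCN(h=12, input_size=24, max_steps=100)
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset, step_size=1)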

Usage Example

import pandas as pd
import matplotlib.pyplot as plt

from neuralforecast import NeuralForecast
from neuralforecast.losses.pytorch import GMM
from neuralforecast.models import BiTCN
from neuralforecast.utils import AirPassengersPanel, AirPassengersStatic

Y_train_df = AirPassengersPanel[AirPassengersPanel.ds<AirPassengersPanel['ds'].values[-12]] # 132 train
Y_test_df = AirPassengersPanel[AirPassengersPanel.ds>=AirPassengersPanel['ds'].values[-12]].reset_index(drop=True) # 12 test

fcst = NeuralForecast(
    models=[
            BiTCN(h=12,
                input_size=24,
                loss=GMM(n_components=7, level=[80,90]),
                max_steps=100,
                scaler_type='standard',
                futr_exog_list=['y_[lag12]'],
                hist_exog_list=None,
                stat_exog_list=['airline1'],
                windows_batch_size=2048,
                val_check_steps=10,
                early_stop_patience_steps=-1,
                ),     
    ],
    freq='ME'
)
fcst.fit(df=Y_train_df, static_df=AirPassengersStatic)
forecasts = fcst.predict(futr_df=Y_test_df)

# Plot quantile predictions
Y_hat_df = forecasts.reset_index(drop=False).drop(columns=['unique_id','ds'])
plot_df = pd.concat([Y_test_df, Y_hat_df], axis=1)
plot_df = pd.concat([Y_train_df, plot_df])

plot_df = plot_df[plot_df.unique_id=='Airline1'].drop('unique_id', axis=1)
plt.plot(plot_df['ds'], plot_df['y'], c='black', label='True')
plt.plot(plot_df['ds'], plot_df['BiTCN-median'], c='blue', label='median')
plt.fill_between(x=plot_df['ds'][-12:], 
                 y1=plot_df['BiTCN-lo-90'][-12:].values,
                 y2=plot_df['BiTCN-hi-90'][-12:].values,
                 alpha=0.4, label='level 90')
plt.legend()
plt.grid()
plt.show()

2. Auxiliary functions

TCNCell

TCNCell(
    in_channels,
    out_channels,
    kernel_size,
    padding,
    dilation,
    mode,
    groups,
    dropout,
)
Bases: Module

Temporal Convolutional Network Cell, consisting of CustomConv1d modules.

CustomConv1d

CustomConv1d(
    in_channels,
    out_channels,
    kernel_size,
    padding=0,
    dilation=1,
    mode="backward",
    groups=1,
)
Bases: Module

Forward- and backward-looking Conv1d.
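Conceptually, the mode argument flips the direction of the causal padding. A minimal sketch under that assumption (not the library's actual code): 'backward' pads the past so each output only sees inputs at or before t, while 'forward' pads the future so each output only sees inputs at or after t.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DirectionalConv1d(nn.Module):
    """Direction-aware Conv1d (conceptual sketch, hypothetical class)."""
    def __init__(self, in_channels, out_channels, kernel_size,
                 dilation=1, mode="backward"):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.mode = mode
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size,
                              dilation=dilation)

    def forward(self, x):  # x: (batch, channels, time)
        if self.mode == "backward":
            x = F.pad(x, (self.pad, 0))  # pad the past: output t sees inputs <= t
        else:
            x = F.pad(x, (0, self.pad))  # pad the future: output t sees inputs >= t
        return self.conv(x)

conv = DirectionalConv1d(8, 8, kernel_size=2, dilation=2, mode="forward")
y = conv(torch.randn(1, 8, 64))  # output length matches input length: (1, 8, 64)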