In many applications, a set of time series is hierarchically organized. Examples include geographic levels, products, or categories that define different types of aggregations. In such scenarios, forecasters are often required to provide predictions for all disaggregate and aggregate series. A natural desire is for those predictions to be “coherent”, that is, for the bottom series to add up precisely to the forecasts of the aggregated series.
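To make the notion of coherence concrete, here is a small sketch with made-up numbers (not from the Tourism dataset): two bottom series aggregate into one total, and forecasts are coherent exactly when the bottom forecasts sum to the aggregate forecast.

```python
import numpy as np

# Summing matrix S: first row is the aggregate (total),
# remaining rows are the two bottom series themselves
S = np.array([
    [1.0, 1.0],  # total = bottom_1 + bottom_2
    [1.0, 0.0],
    [0.0, 1.0],
])

y_bottom = np.array([10.0, 20.0])  # bottom-level forecasts
y_coherent = S @ y_bottom          # [30., 10., 20.] -> total equals sum of bottom

# Independently produced base forecasts are generally NOT coherent:
y_base = np.array([32.0, 10.0, 20.0])  # total forecast != 10 + 20
print(y_coherent)                       # [30. 10. 20.]
print(y_base[0] == y_base[1:].sum())    # False
```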

In this notebook we present an example of how to use HierarchicalForecast to produce forecasts that are coherent across both geographical and temporal levels. We will use the classic Australian Domestic Tourism (Tourism) dataset, which contains quarterly time series of the number of visitors to each state of Australia.

We will first load the Tourism data and produce base forecasts using an AutoETS model from StatsForecast. Then, we reconcile the forecasts with several reconciliation algorithms from HierarchicalForecast according to the cross-sectional geographical hierarchies. Finally, we reconcile the forecasts in the temporal dimension according to a temporal hierarchy.

You can run these experiments using CPU or GPU with Google Colab.

!pip install hierarchicalforecast statsforecast

1. Load and Process Data

In this example we will use the Tourism dataset from the Forecasting: Principles and Practice book.

The dataset only contains the time series at the lowest level, so we need to create the time series for all hierarchies.

import numpy as np
import pandas as pd
Y_df = pd.read_csv('https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/tourism.csv')
Y_df = Y_df.rename({'Trips': 'y', 'Quarter': 'ds'}, axis=1)
Y_df.insert(0, 'Country', 'Australia')
Y_df = Y_df[['Country', 'Region', 'State', 'Purpose', 'ds', 'y']]
Y_df['ds'] = Y_df['ds'].str.replace(r'(\d+) (Q\d)', r'\1-\2', regex=True)
Y_df['ds'] = pd.PeriodIndex(Y_df["ds"], freq='Q').to_timestamp()
Y_df.head()
   Country    Region    State            Purpose   ds          y
0  Australia  Adelaide  South Australia  Business  1998-01-01  135.077690
1  Australia  Adelaide  South Australia  Business  1998-04-01  109.987316
2  Australia  Adelaide  South Australia  Business  1998-07-01  166.034687
3  Australia  Adelaide  South Australia  Business  1998-10-01  127.160464
4  Australia  Adelaide  South Australia  Business  1999-01-01  137.448533

2. Cross-sectional reconciliation

2a. Aggregating the dataset according to cross-sectional hierarchy

The dataset can be grouped in the following non-strictly hierarchical structure.

spec = [
    ['Country'],
    ['Country', 'State'], 
    ['Country', 'Purpose'], 
    ['Country', 'State', 'Region'], 
    ['Country', 'State', 'Purpose'], 
    ['Country', 'State', 'Region', 'Purpose']
]
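Conceptually, each list in the spec defines one aggregation level: the listed columns are joined with '/' to form the unique_id, and y is summed over everything else. A rough sketch of what a single entry such as ['Country', 'State'] produces on toy data (a simplified illustration, not the library's implementation):

```python
import pandas as pd

# Toy bottom-level data (made-up numbers)
df = pd.DataFrame({
    "Country": ["Australia"] * 4,
    "State": ["ACT", "ACT", "Queensland", "Queensland"],
    "ds": ["1998-01-01", "1998-04-01", "1998-01-01", "1998-04-01"],
    "y": [1.0, 2.0, 3.0, 4.0],
})

# One spec entry, e.g. ['Country', 'State']:
level = ["Country", "State"]
agg = df.groupby(level + ["ds"], as_index=False)["y"].sum()
agg["unique_id"] = agg[level].agg("/".join, axis=1)
print(agg[["unique_id", "ds", "y"]])
```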

Using the aggregate function from HierarchicalForecast we can get the full set of time series.

from hierarchicalforecast.utils import aggregate
Y_df_cs, S_df_cs, tags_cs = aggregate(Y_df, spec)
Y_df_cs
       unique_id                                         ds          y
0      Australia                                         1998-01-01  23182.197269
1      Australia                                         1998-04-01  20323.380067
2      Australia                                         1998-07-01  19826.640511
3      Australia                                         1998-10-01  20830.129891
4      Australia                                         1999-01-01  22087.353380
...
33995  Australia/Western Australia/Experience Perth/V…  2016-10-01  439.699451
33996  Australia/Western Australia/Experience Perth/V…  2017-01-01  356.867038
33997  Australia/Western Australia/Experience Perth/V…  2017-04-01  302.296119
33998  Australia/Western Australia/Experience Perth/V…  2017-07-01  373.442070
33999  Australia/Western Australia/Experience Perth/V…  2017-10-01  455.316702
S_df_cs.iloc[:5, :5]
   unique_id                     Australia/ACT/Canberra/Business  Australia/ACT/Canberra/Holiday  Australia/ACT/Canberra/Other  Australia/ACT/Canberra/Visiting
0  Australia                     1.0                              1.0                             1.0                           1.0
1  Australia/ACT                 1.0                              1.0                             1.0                           1.0
2  Australia/New South Wales     0.0                              0.0                             0.0                           0.0
3  Australia/Northern Territory  0.0                              0.0                             0.0                           0.0
4  Australia/Queensland          0.0                              0.0                             0.0                           0.0

2b. Split Train/Test sets

We use the final two years (8 quarters) as the test set, so our forecast horizon is 8.

horizon = 8
Y_test_df_cs = Y_df_cs.groupby("unique_id", as_index=False).tail(horizon)
Y_train_df_cs = Y_df_cs.drop(Y_test_df_cs.index)
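The groupby(...).tail(horizon) pattern takes the last horizon rows of each series; dropping those indices leaves the training set. A quick sketch on a toy frame (made-up data) to show the behaviour:

```python
import pandas as pd

toy = pd.DataFrame({
    "unique_id": ["a"] * 4 + ["b"] * 4,
    "ds": list(range(4)) * 2,
    "y": range(8),
})
# Last 2 rows of each series become the test set
test = toy.groupby("unique_id", as_index=False).tail(2)
train = toy.drop(test.index)
print(test["ds"].tolist())   # [2, 3, 2, 3]
print(train["ds"].tolist())  # [0, 1, 0, 1]
```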

2c. Computing base forecasts

The following cell computes the base forecasts for each time series in Y_train_df_cs using the AutoETS model. Observe that Y_hat_df_cs contains the forecasts, but they are not coherent.

from statsforecast.models import AutoETS
from statsforecast.core import StatsForecast
fcst = StatsForecast(models=[AutoETS(season_length=4, model='ZZA')], 
                     freq='QS', n_jobs=-1)
Y_hat_df_cs = fcst.forecast(df=Y_train_df_cs, h=horizon, fitted=True)
Y_fitted_df_cs = fcst.forecast_fitted_values()

2d. Reconcile forecasts

The following cell makes the previous forecasts coherent using the HierarchicalReconciliation class. Since the hierarchy structure is not strict, we can’t use methods such as TopDown or MiddleOut. In this example we use BottomUp and MinTrace.

from hierarchicalforecast.methods import BottomUp, MinTrace
from hierarchicalforecast.core import HierarchicalReconciliation
reconcilers = [
    BottomUp(),
    MinTrace(method='mint_shrink'),
    MinTrace(method='ols')
]
hrec = HierarchicalReconciliation(reconcilers=reconcilers)
Y_rec_df_cs = hrec.reconcile(Y_hat_df=Y_hat_df_cs, Y_df=Y_fitted_df_cs, S=S_df_cs, tags=tags_cs)

The dataframe Y_rec_df_cs contains the reconciled forecasts.

Y_rec_df_cs.head()
   unique_id  ds          AutoETS       AutoETS/BottomUp  AutoETS/MinTrace_method-mint_shrink  AutoETS/MinTrace_method-ols
0  Australia  2016-01-01  25990.068004  24381.911737      25428.089783                         25894.399067
1  Australia  2016-04-01  24458.490282  22903.895964      23914.271400                         24357.301898
2  Australia  2016-07-01  23974.055984  22412.265739      23428.462394                         23865.910647
3  Australia  2016-10-01  24563.454495  23127.349578      24089.845955                         24470.782393
4  Australia  2017-01-01  25990.068004  24518.118006      25545.358678                         25901.362283

3. Temporal reconciliation

Next, we aim to reconcile our forecasts also in the temporal domain.

3a. Aggregating the dataset according to temporal hierarchy

We first define the temporal aggregation spec. The spec is a dictionary in which each key is the name of an aggregation and the corresponding value is the number of bottom-level timesteps that should be aggregated into it. For example, our bottom-level timesteps have a quarterly frequency and a year consists of 4 quarters, so we define the key-value pair "year": 4. We can do something similar for the other aggregations that we are interested in.

In this example, we choose a temporal aggregation of year, semiannual and quarter.

spec_temporal = {"year": 4, "semiannual": 2, "quarter": 1}
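To illustrate what this spec implies, here is a rough sketch (not the library's implementation) of how 8 quarterly values roll up: consecutive blocks of 4 quarters form years, and blocks of 2 form semiannual periods.

```python
import numpy as np

quarters = np.arange(1.0, 9.0)  # 8 quarterly observations: 1..8
spec_temporal = {"year": 4, "semiannual": 2, "quarter": 1}

# Sum consecutive blocks of k timesteps for each aggregation level
aggregates = {
    name: quarters.reshape(-1, k).sum(axis=1)
    for name, k in spec_temporal.items()
}
print(aggregates["year"])        # [10. 26.]  (1+2+3+4, 5+6+7+8)
print(aggregates["semiannual"])  # [ 3.  7. 11. 15.]
print(aggregates["quarter"])     # [1. 2. 3. 4. 5. 6. 7. 8.]
```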

We next compute the temporally aggregated train and test sets using the aggregate_temporal function. Note that the train and test sets have different aggregation matrices S, as the test set contains temporal hierarchy identifiers that are not present in the train set.

from hierarchicalforecast.utils import aggregate_temporal
Y_train_df_te, S_train_df_te, tags_te_train = aggregate_temporal(df=Y_train_df_cs, spec=spec_temporal)
Y_test_df_te, S_test_df_te, tags_te_test = aggregate_temporal(df=Y_test_df_cs, spec=spec_temporal)
S_train_df_te.iloc[:5, :5]
   temporal_id  quarter-1  quarter-2  quarter-3  quarter-4
0  year-1       1.0        1.0        1.0        1.0
1  year-2       0.0        0.0        0.0        0.0
2  year-3       0.0        0.0        0.0        0.0
3  year-4       0.0        0.0        0.0        0.0
4  year-5       0.0        0.0        0.0        0.0
S_test_df_te.iloc[:5, :5]
   temporal_id   quarter-1  quarter-2  quarter-3  quarter-4
0  year-1        1.0        1.0        1.0        1.0
1  year-2        0.0        0.0        0.0        0.0
2  semiannual-1  1.0        1.0        0.0        0.0
3  semiannual-2  0.0        0.0        1.0        1.0
4  semiannual-3  0.0        0.0        0.0        0.0

If you don’t have a test set available, as is usually the case when you’re making forecasts, it is necessary to create a future dataframe that holds the correct bottom-level unique_ids and timestamps so that they can be temporally aggregated. We can use the make_future_dataframe helper function for that.

from hierarchicalforecast.utils import make_future_dataframe
Y_test_df_te_new = make_future_dataframe(Y_train_df_te, freq="QS", h=horizon)

Y_test_df_te_new can then be used in aggregate_temporal to construct the temporally aggregated structures:

Y_test_df_te_new, S_test_df_te_new, tags_te_test_new = aggregate_temporal(df=Y_test_df_te_new, spec=spec_temporal)

And we can verify that we have the same temporally aggregated test set, except that Y_test_df_te_new doesn’t contain the ground truth values y.

Y_test_df_te
      temporal_id  unique_id                             ds          y
0     year-1       Australia                             2016-10-01  101484.586551
1     year-2       Australia                             2017-10-01  107709.864650
2     year-1       Australia/ACT                         2016-10-01  2457.401367
3     year-2       Australia/ACT                         2017-10-01  2734.748452
4     year-1       Australia/ACT/Business                2016-10-01  754.139245
...
5945  quarter-4    Australia/Western Australia/Visiting  2016-10-01  787.030391
5946  quarter-5    Australia/Western Australia/Visiting  2017-01-01  702.777251
5947  quarter-6    Australia/Western Australia/Visiting  2017-04-01  642.516090
5948  quarter-7    Australia/Western Australia/Visiting  2017-07-01  646.521395
5949  quarter-8    Australia/Western Australia/Visiting  2017-10-01  813.184778
Y_test_df_te_new
      temporal_id  unique_id                             ds
0     year-1       Australia                             2016-10-01
1     year-2       Australia                             2017-10-01
2     year-1       Australia/ACT                         2016-10-01
3     year-2       Australia/ACT                         2017-10-01
4     year-1       Australia/ACT/Business                2016-10-01
...
5945  quarter-4    Australia/Western Australia/Visiting  2016-10-01
5946  quarter-5    Australia/Western Australia/Visiting  2017-01-01
5947  quarter-6    Australia/Western Australia/Visiting  2017-04-01
5948  quarter-7    Australia/Western Australia/Visiting  2017-07-01
5949  quarter-8    Australia/Western Australia/Visiting  2017-10-01

3b. Computing base forecasts

Now, we need to compute base forecasts for each temporal aggregation. The following cell computes the base forecasts for each temporal aggregation in Y_train_df_te using the AutoETS model. Observe that Y_hat_df_te contains the forecasts but they are not coherent.

Note also that both frequency and horizon are different for each temporal aggregation. In this example, the lowest level has a quarterly frequency, and a horizon of 8 (constituting 2 years). The year aggregation thus has a yearly frequency with a horizon of 2.
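Since each temporal level aggregates k bottom-level timesteps, its horizon is the bottom-level horizon divided by k. A small sketch of that relationship (the cell below instead infers it from the test set, which gives the same result here):

```python
spec_temporal = {"year": 4, "semiannual": 2, "quarter": 1}
horizon = 8  # bottom-level (quarterly) horizon

# Each level forecasts horizon // k aggregated periods
horizons = {level: horizon // k for level, k in spec_temporal.items()}
print(horizons)  # {'year': 2, 'semiannual': 4, 'quarter': 8}
```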

It is of course possible to choose a different model for each level in the temporal aggregation - you can be as creative as you like!

Y_hat_dfs_te = []
id_cols = ["unique_id", "temporal_id", "ds", "y"]
# We will train a model for each temporal level
for level, temporal_ids_train in tags_te_train.items():
    # Filter the data for the level
    Y_level_train = Y_train_df_te.query("temporal_id in @temporal_ids_train")
    temporal_ids_test = tags_te_test[level]
    Y_level_test = Y_test_df_te.query("temporal_id in @temporal_ids_test")
    # For each temporal level we have a different frequency and forecast horizon
    freq_level = pd.infer_freq(Y_level_train["ds"].unique())
    horizon_level = Y_level_test["ds"].nunique()
    # Train a model and create forecasts
    fcst = StatsForecast(models=[AutoETS(model='ZZZ')], freq=freq_level, n_jobs=-1)
    Y_hat_df_te_level = fcst.forecast(df=Y_level_train[["ds", "unique_id", "y"]], h=horizon_level)
    # Add the test set to the forecast
    Y_hat_df_te_level = Y_hat_df_te_level.merge(Y_level_test, on=["ds", "unique_id"], how="left")
    # Put cols in the right order (for readability)
    Y_hat_cols = id_cols + [col for col in Y_hat_df_te_level.columns if col not in id_cols]
    Y_hat_df_te_level = Y_hat_df_te_level[Y_hat_cols]
    # Append the forecast to the list
    Y_hat_dfs_te.append(Y_hat_df_te_level)

Y_hat_df_te = pd.concat(Y_hat_dfs_te, ignore_index=True)

3c. Reconcile forecasts

We can again use the HierarchicalReconciliation class to reconcile the forecasts. In this example we use BottomUp and MinTrace. Note that we have to set temporal=True in the reconcile function.

Note that temporal reconciliation currently isn’t supported for reconciliation methods that require in-sample values, such as MinTrace(method='mint_shrink').

reconcilers = [
    BottomUp(),
    MinTrace(method='ols')
]
hrec = HierarchicalReconciliation(reconcilers=reconcilers)
Y_rec_df_te = hrec.reconcile(Y_hat_df=Y_hat_df_te, S=S_test_df_te, tags=tags_te_test, temporal=True)

4. Evaluation

The HierarchicalForecast package includes the evaluate function to evaluate the different hierarchies.

from hierarchicalforecast.evaluation import evaluate
from utilsforecast.losses import rmse
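Conceptually, evaluate computes each metric per tag group: it restricts the dataframe to the ids belonging to a tag, then scores each model column against y. A rough sketch of that per-group RMSE computation on toy data (a simplified illustration, not the library's implementation):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    "unique_id": ["a", "a", "b", "b"],
    "y":     [1.0, 2.0, 3.0, 4.0],
    "Model": [1.5, 2.5, 3.0, 5.0],
})
tags = {"Total": ["a"], "Bottom": ["a", "b"]}

def rmse_per_tag(df, tags, model_col):
    """RMSE of one model column, computed separately for each tag group."""
    out = {}
    for level, ids in tags.items():
        sub = df[df["unique_id"].isin(ids)]
        out[level] = float(np.sqrt(np.mean((sub["y"] - sub[model_col]) ** 2)))
    return out

print(rmse_per_tag(toy, tags, "Model"))
```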

4a. Cross-sectional evaluation

We first evaluate the forecasts across all cross-sectional aggregations.

eval_tags = {}
eval_tags['Total'] = tags_cs['Country']
eval_tags['Purpose'] = tags_cs['Country/Purpose']
eval_tags['State'] = tags_cs['Country/State']
eval_tags['Regions'] = tags_cs['Country/State/Region']
eval_tags['Bottom'] = tags_cs['Country/State/Region/Purpose']

evaluation = evaluate(df = Y_rec_df_te.drop(columns = 'temporal_id'),
                      tags = eval_tags,
                      metrics = [rmse])

evaluation.columns = ['level', 'metric', 'Base', 'BottomUp', 'MinTrace(ols)']
numeric_cols = evaluation.select_dtypes(include="number").columns
evaluation[numeric_cols] = evaluation[numeric_cols].map('{:.2f}'.format).astype(np.float64)
evaluation
   level    metric  Base     BottomUp  MinTrace(ols)
0  Total    rmse    4249.25  4461.95   4234.55
1  Purpose  rmse    1222.57  1273.48   1137.57
2  State    rmse    635.78   546.02    611.32
3  Regions  rmse    103.67   107.00    99.23
4  Bottom   rmse    33.15    33.98     32.30
5  Overall  rmse    81.89    82.41     78.97

As can be seen, MinTrace(ols) is the best method for most cross-sectional aggregations, achieving the lowest overall RMSE, although BottomUp performs best at the State level.

4b. Temporal evaluation

We then evaluate the temporally aggregated forecasts across all temporal aggregations.

evaluation = evaluate(df = Y_rec_df_te.drop(columns = 'unique_id'),
                      tags = tags_te_test,
                      metrics = [rmse],
                      id_col="temporal_id")

evaluation.columns = ['level', 'metric', 'Base', 'BottomUp', 'MinTrace(ols)']
numeric_cols = evaluation.select_dtypes(include="number").columns
evaluation[numeric_cols] = evaluation[numeric_cols].map('{:.2f}'.format).astype(np.float64)
evaluation
   level       metric  Base    BottomUp  MinTrace(ols)
0  year        rmse    480.85  581.18    515.32
1  semiannual  rmse    312.33  304.98    275.30
2  quarter     rmse    168.02  168.02    155.61
3  Overall     rmse    253.94  266.17    241.19

Again, MinTrace(ols) is the best method overall, scoring the lowest RMSE on the semiannual and quarter aggregations, although it is slightly worse than the base forecasts on the year aggregation.

4c. Cross-temporal evaluation

Finally, we evaluate cross-temporally. To do so, we first need to obtain the combination of cross-sectional and temporal hierarchies, for which we can use the get_cross_temporal_tags helper function.

from hierarchicalforecast.utils import get_cross_temporal_tags
Y_rec_df_te, tags_ct = get_cross_temporal_tags(Y_rec_df_te, tags_cs=tags_cs, tags_te=tags_te_test)

As we can see, we now have a tag Country//year that contains Australia//year-1 and Australia//year-2, indicating the cross-sectional hierarchy Australia at the temporal hierarchies 2016 and 2017.

tags_ct["Country//year"]
['Australia//year-1', 'Australia//year-2']
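Under the hood, a cross-temporal id is simply the cross-sectional unique_id and the temporal_id joined with '//'. A minimal sketch of how such tags can be formed (an illustration, not the library's implementation):

```python
cs_ids = ["Australia", "Australia/ACT"]  # ids from a cross-sectional tag
te_ids = ["year-1", "year-2"]            # ids from a temporal tag

# Combine every cross-sectional id with every temporal id
cross_temporal = [f"{cs}//{te}" for cs in cs_ids for te in te_ids]
print(cross_temporal)
# ['Australia//year-1', 'Australia//year-2', 'Australia/ACT//year-1', 'Australia/ACT//year-2']
```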

We now have our dataset and cross-temporal tags ready for evaluation.

We define a set of eval_tags, this time splitting each cross-sectional aggregation also by each temporal aggregation. Note that we skip the semiannual temporal aggregation in the overview below.

eval_tags = {}
eval_tags['TotalByYear'] = tags_ct['Country//year']
eval_tags['RegionsByYear'] = tags_ct['Country/State/Region//year']
eval_tags['BottomByYear'] = tags_ct['Country/State/Region/Purpose//year']
eval_tags['TotalByQuarter'] = tags_ct['Country//quarter']
eval_tags['RegionsByQuarter'] = tags_ct['Country/State/Region//quarter']
eval_tags['BottomByQuarter'] = tags_ct['Country/State/Region/Purpose//quarter']


evaluation = evaluate(df = Y_rec_df_te.drop(columns=['unique_id', 'temporal_id']),
                      tags = eval_tags,
                      id_col = 'cross_temporal_id',
                      metrics = [rmse])

evaluation.columns = ['level', 'metric', 'Base', 'BottomUp', 'MinTrace(ols)']
numeric_cols = evaluation.select_dtypes(include="number").columns
evaluation[numeric_cols] = evaluation[numeric_cols].map('{:.2f}'.format).astype(np.float64)
evaluation
   level             metric  Base     BottomUp  MinTrace(ols)
0  TotalByYear       rmse    7148.99  8243.06   7748.40
1  RegionsByYear     rmse    151.96   175.69    158.48
2  BottomByYear      rmse    46.98    50.78     46.72
3  TotalByQuarter    rmse    2060.77  2060.77   1942.32
4  RegionsByQuarter  rmse    57.07    57.07     54.12
5  BottomByQuarter   rmse    19.42    19.42     18.69
6  Overall           rmse    43.14    45.27     42.49

We find that the best method is the cross-temporally reconciled AutoETS/MinTrace_method-ols, which achieves the lowest overall RMSE.

References