In many applications, a set of time series is hierarchically organized. Examples include geographic levels, products, or categories that define different types of aggregations. In such scenarios, forecasters are often required to provide predictions for all disaggregate and aggregate series. A natural requirement is for those predictions to be “coherent”, that is, for the bottom-level series to add up precisely to the forecasts of the aggregated series.
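To make the idea concrete, here is a minimal toy sketch (hypothetical numbers, unrelated to the dataset used below): a set of forecasts is coherent when each aggregate equals the sum of its children.

import numpy as np

# Hypothetical bottom-level forecasts for two regions of one state (two future quarters each)
region_a = np.array([100.0, 110.0])
region_b = np.array([50.0, 55.0])

# A coherent state-level forecast is exactly the sum of its regions
state = region_a + region_b
print(state)  # [150. 165.]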

In this notebook we present an example of how to use HierarchicalForecast to produce coherent forecasts across geographical levels. We will use the classic Australian Domestic Tourism (Tourism) dataset, which contains quarterly time series of the number of visitors to each state of Australia.

We will first load the Tourism data and produce base forecasts using an ETS model from StatsForecast, and then reconcile the forecasts with several reconciliation algorithms from HierarchicalForecast. Finally, we show that the performance is comparable to the results reported in Forecasting: Principles and Practice, which uses the R package fable.

You can run these experiments using CPU or GPU with Google Colab.

!pip install hierarchicalforecast
!pip install -U statsforecast numba

1. Load and Process Data

In this example we will use the Tourism dataset from the Forecasting: Principles and Practice book.

The dataset only contains the time series at the lowest level, so we need to create the time series for all hierarchies.

import numpy as np
import pandas as pd
Y_df = pd.read_csv('https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/tourism.csv')
Y_df = Y_df.rename({'Trips': 'y', 'Quarter': 'ds'}, axis=1)
Y_df.insert(0, 'Country', 'Australia')
Y_df = Y_df[['Country', 'Region', 'State', 'Purpose', 'ds', 'y']]
Y_df['ds'] = Y_df['ds'].str.replace(r'(\d+) (Q\d)', r'\1-\2', regex=True)
Y_df['ds'] = pd.to_datetime(Y_df['ds'])
Y_df.head()
   Country    Region    State            Purpose   ds          y
0  Australia  Adelaide  South Australia  Business  1998-01-01  135.077690
1  Australia  Adelaide  South Australia  Business  1998-04-01  109.987316
2  Australia  Adelaide  South Australia  Business  1998-07-01  166.034687
3  Australia  Adelaide  South Australia  Business  1998-10-01  127.160464
4  Australia  Adelaide  South Australia  Business  1999-01-01  137.448533

The dataset can be grouped according to the following non-strictly hierarchical structure.

spec = [
    ['Country'],
    ['Country', 'State'], 
    ['Country', 'Purpose'], 
    ['Country', 'State', 'Region'], 
    ['Country', 'State', 'Purpose'], 
    ['Country', 'State', 'Region', 'Purpose']
]

Using the aggregate function from HierarchicalForecast we can get the full set of time series.

from hierarchicalforecast.utils import aggregate
Y_df, S_df, tags = aggregate(Y_df, spec)
Y_df = Y_df.reset_index()
Y_df.head()
   unique_id  ds          y
0  Australia  1998-01-01  23182.197269
1  Australia  1998-04-01  20323.380067
2  Australia  1998-07-01  19826.640511
3  Australia  1998-10-01  20830.129891
4  Australia  1999-01-01  22087.353380
S_df.iloc[:5, :5]
unique_id                     Australia/ACT/Canberra/Business  Australia/ACT/Canberra/Holiday  Australia/ACT/Canberra/Other  Australia/ACT/Canberra/Visiting  Australia/New South Wales/Blue Mountains/Business
Australia                     1.0  1.0  1.0  1.0  1.0
Australia/ACT                 1.0  1.0  1.0  1.0  0.0
Australia/New South Wales     0.0  0.0  0.0  0.0  1.0
Australia/Northern Territory  0.0  0.0  0.0  0.0  0.0
Australia/Queensland          0.0  0.0  0.0  0.0  0.0
tags['Country/Purpose']
array(['Australia/Business', 'Australia/Holiday', 'Australia/Other',
       'Australia/Visiting'], dtype=object)
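As an optional check (a short sketch using the tags dictionary and S_df returned above), we can count how many series each level contains and inspect the dimensions of the summing matrix:

# Number of series per level of the grouped structure
print({level: len(ids) for level, ids in tags.items()})

# S_df has one row per series (aggregate and bottom) and one column per bottom-level series
print(S_df.shape)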

Split Train/Test sets

We use the final two years (8 quarters) as the test set.

Y_test_df = Y_df.groupby('unique_id').tail(8)
Y_train_df = Y_df.drop(Y_test_df.index)
Y_test_df = Y_test_df.set_index('unique_id')
Y_train_df = Y_train_df.set_index('unique_id')
Y_train_df.groupby('unique_id').size()
unique_id
Australia                                                72
Australia/ACT                                            72
Australia/ACT/Business                                   72
Australia/ACT/Canberra                                   72
Australia/ACT/Canberra/Business                          72
                                                         ..
Australia/Western Australia/Experience Perth/Other       72
Australia/Western Australia/Experience Perth/Visiting    72
Australia/Western Australia/Holiday                      72
Australia/Western Australia/Other                        72
Australia/Western Australia/Visiting                     72
Length: 425, dtype: int64

2. Computing base forecasts

The following cell computes the base forecasts for each time series in Y_df using the ETS model. Observe that Y_hat_df contains the forecasts, but they are not coherent; a quick check after the cell illustrates this. We also store the in-sample fitted values, which are needed later to estimate the residuals used by MinTrace.

from statsforecast.models import ETS
from statsforecast.core import StatsForecast

# ETS with model='ZZA': error and trend components selected automatically, additive seasonality
fcst = StatsForecast(df=Y_train_df,
                     models=[ETS(season_length=4, model='ZZA')],
                     freq='QS', n_jobs=-1)
Y_hat_df = fcst.forecast(h=8, fitted=True)    # base forecasts for the next 8 quarters
Y_fitted_df = fcst.forecast_fitted_values()   # in-sample fitted values
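As a sanity check (a sketch assuming Y_hat_df keeps the unique_id index and the 'ETS' column produced above), we can confirm that the base forecasts are not coherent: summing the bottom-level forecasts does not reproduce the forecast of the national total.

bottom_ids = tags['Country/State/Region/Purpose']   # bottom-level series
first_ds = Y_hat_df.loc['Australia', 'ds'].iloc[0]  # first forecasted quarter

total_base = Y_hat_df.loc['Australia'].query('ds == @first_ds')['ETS'].item()
bottom_sum = Y_hat_df.loc[bottom_ids].query('ds == @first_ds')['ETS'].sum()
print(total_base, bottom_sum)  # the two values differ: base forecasts are not coherent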

3. Reconcile forecasts

The following cell makes the previous forecasts coherent using the HierarchicalReconciliation class. Since the hierarchy structure is not strict, we can’t use methods such as TopDown or MiddleOut. In this example we use BottomUp and MinTrace.

from hierarchicalforecast.methods import BottomUp, MinTrace
from hierarchicalforecast.core import HierarchicalReconciliation
reconcilers = [
    BottomUp(),
    MinTrace(method='mint_shrink'),
    MinTrace(method='ols')
]
hrec = HierarchicalReconciliation(reconcilers=reconcilers)
Y_rec_df = hrec.reconcile(Y_hat_df=Y_hat_df, Y_df=Y_fitted_df, S=S_df, tags=tags)

The dataframe Y_rec_df contains the reconciled forecasts.

Y_rec_df.head()
unique_id  ds          ETS           ETS/BottomUp  ETS/MinTrace_method-mint_shrink  ETS/MinTrace_method-ols
Australia  2016-01-01  25990.068359  24379.679688  25438.888351                     25894.418893
Australia  2016-04-01  24458.490234  22902.664062  23925.188541                     24357.230480
Australia  2016-07-01  23974.056641  22412.984375  23440.310338                     23865.929521
Australia  2016-10-01  24563.455078  23127.638672  24101.001833                     24470.783968
Australia  2017-01-01  25990.068359  24516.175781  25556.667616                     25901.382401
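Repeating the earlier check on Y_rec_df (a sketch under the same assumptions about indexing and column names) shows that the reconciled columns now satisfy the aggregation constraint; for instance, the bottom-level BottomUp forecasts add up to the national total.

bottom_ids = tags['Country/State/Region/Purpose']
first_ds = Y_rec_df.loc['Australia', 'ds'].iloc[0]

rec_total = Y_rec_df.loc['Australia'].query('ds == @first_ds')['ETS/BottomUp'].item()
rec_bottom_sum = Y_rec_df.loc[bottom_ids].query('ds == @first_ds')['ETS/BottomUp'].sum()
print(rec_total, rec_bottom_sum)  # these now match up to floating-point error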

4. Evaluation

The HierarchicalForecast package includes the HierarchicalEvaluation class to evaluate the different levels of the hierarchy; it can also compute scaled metrics relative to a benchmark model.

from hierarchicalforecast.evaluation import HierarchicalEvaluation
def rmse(y, y_hat):
    return np.mean(np.sqrt(np.mean((y-y_hat)**2, axis=1)))

def mase(y, y_hat, y_insample, seasonality=4):
    # Scale the out-of-sample MAE by the in-sample MAE of the seasonal naive forecast
    errors = np.mean(np.abs(y - y_hat), axis=1)
    scale = np.mean(np.abs(y_insample[:, seasonality:] - y_insample[:, :-seasonality]), axis=1)
    return np.mean(errors / scale)

eval_tags = {}
eval_tags['Total'] = tags['Country']
eval_tags['Purpose'] = tags['Country/Purpose']
eval_tags['State'] = tags['Country/State']
eval_tags['Regions'] = tags['Country/State/Region']
eval_tags['Bottom'] = tags['Country/State/Region/Purpose']
eval_tags['All'] = np.concatenate(list(tags.values()))

evaluator = HierarchicalEvaluation(evaluators=[rmse, mase])
evaluation = evaluator.evaluate(
        Y_hat_df=Y_rec_df, Y_test_df=Y_test_df,
        tags=eval_tags, Y_df=Y_train_df
)
evaluation = evaluation.drop('Overall')
evaluation.columns = ['Base', 'BottomUp', 'MinTrace(mint_shrink)', 'MinTrace(ols)']
evaluation = evaluation.applymap('{:.2f}'.format)

RMSE

The following table shows the performance measured using RMSE across levels for each reconciliation method.

evaluation.query('metric == "rmse"')
level    metric  Base     BottomUp  MinTrace(mint_shrink)  MinTrace(ols)
Total    rmse    1743.29  3028.93   2102.47                1818.94
Purpose  rmse    534.75   791.28    574.84                 515.53
State    rmse    308.15   413.43    315.89                 287.34
Regions  rmse    51.65    55.14     46.48                  46.29
Bottom   rmse    19.37    19.37     17.78                  18.19
All      rmse    45.19    54.95     44.59                  42.71

MASE

The following table shows the performance measured using MASE across levels for each reconciliation method.

evaluation.query('metric == "mase"')
level    metric  Base  BottomUp  MinTrace(mint_shrink)  MinTrace(ols)
Total    mase    1.59  3.16      2.05                   1.67
Purpose  mase    1.32  2.28      1.48                   1.25
State    mase    1.39  1.90      1.39                   1.25
Regions  mase    1.12  1.19      1.01                   0.99
Bottom   mase    0.98  0.98      0.94                   1.01
All      mase    1.03  1.08      0.98                   1.02

Comparison with fable

Observe that we can recover the results reported in Forecasting: Principles and Practice. The original results were calculated using the R package fable.

References
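Hyndman, R.J., & Athanasopoulos, G. (2021). Forecasting: Principles and Practice (3rd ed.). OTexts. https://otexts.com/fpp3/
Wickramasuriya, S.L., Athanasopoulos, G., & Hyndman, R.J. (2019). Optimal forecast reconciliation for hierarchical and grouped time series through trace minimization. Journal of the American Statistical Association, 114(526), 804-819.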