You can run these experiments using CPU or GPU with Google Colab.


1. Hierarchical Series

In many applications, a set of time series is hierarchically organized. Examples include geographic levels, products, or categories that define different types of aggregations.

In such scenarios, forecasters are often required to provide predictions for all disaggregate and aggregate series. A natural requirement is that those predictions be “coherent”, that is, that the forecasts of the bottom series add up precisely to the forecasts of the aggregated series.

Figure 1 shows a simple hierarchical structure where we have four bottom-level series, two middle-level series, and the top level representing the total aggregation. Its hierarchical aggregations or coherency constraints are:
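(Writing $y_{\mathrm{Total},\tau}$ for the total, $y_{\beta_1,\tau},\dots,y_{\beta_4,\tau}$ for the four bottom series, and $y_{\beta_{1,2},\tau}$, $y_{\beta_{3,4},\tau}$ for the two middle-level aggregates; these labels are chosen here only to illustrate the structure of Figure 1.)

$$
y_{\mathrm{Total},\tau} = y_{\beta_1,\tau}+y_{\beta_2,\tau}+y_{\beta_3,\tau}+y_{\beta_4,\tau},
\qquad
y_{\beta_{1,2},\tau} = y_{\beta_1,\tau}+y_{\beta_2,\tau},
\qquad
y_{\beta_{3,4},\tau} = y_{\beta_3,\tau}+y_{\beta_4,\tau}
$$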

Luckily these constraints can be compactly expressed with the following matrices:
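(The concrete entries below correspond to the four bottom series of Figure 1.)

$$
\mathbf{S}_{[a,b][b]} =
\begin{bmatrix}
\mathbf{A}_{[a][b]} \\
\mathbf{I}_{[b][b]}
\end{bmatrix}
=
\begin{bmatrix}
1 & 1 & 1 & 1 \\
1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
$$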

where $\mathbf{A}_{[a][b]}$ aggregates the bottom series to the upper levels, and $\mathbf{I}_{[b][b]}$ is an identity matrix. The representation of the hierarchical series is then:
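$$
\mathbf{y}_{[a,b],\tau} = \mathbf{S}_{[a,b][b]} \, \mathbf{y}_{[b],\tau}
$$

where $\mathbf{y}_{[b],\tau}$ stacks the bottom series and $\mathbf{y}_{[a,b],\tau}$ stacks all aggregate and bottom series.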

To visualize an example, consider Figure 2. One can think of the levels of the hierarchical time series structure as representing different geographical aggregations. In Figure 2, the top level is the total aggregation of series within a country, the middle level its states, and the bottom level its regions.

2. Hierarchical Forecast

To achieve “coherency”, most statistical solutions to the hierarchical forecasting challenge implement a two-stage reconciliation process.

  1. First, we obtain a set of base forecasts $\mathbf{\hat{y}}_{[a,b],\tau}$.
  2. Later, we reconcile them into coherent forecasts $\mathbf{\tilde{y}}_{[a,b],\tau}$.

Most hierarchical reconciliation methods can be expressed by the following transformations:
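(Here $\mathbf{P}_{[b][a,b]}$ denotes the matrix, introduced for exposition, that maps the base forecasts into bottom-level estimates; different choices of $\mathbf{P}_{[b][a,b]}$ yield different reconciliation methods.)

$$
\mathbf{\tilde{y}}_{[a,b],\tau} = \mathbf{S}_{[a,b][b]} \, \mathbf{P}_{[b][a,b]} \, \mathbf{\hat{y}}_{[a,b],\tau}
$$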

The HierarchicalForecast library offers a Python collection of reconciliation methods, along with datasets, evaluation, and visualization tools for the task. Among its available reconciliation methods are BottomUp, TopDown, MiddleOut, MinTrace, and ERM. Among its probabilistic coherent methods are Normality, Bootstrap, and PERMBU.
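As a quick sketch of how these reconcilers are instantiated (the keyword arguments shown are illustrative choices, not an exhaustive list of each class's options):

from hierarchicalforecast.methods import BottomUp, TopDown, MiddleOut, MinTrace, ERM

# Each reconciler is a plain Python object; a list of them is later passed to
# HierarchicalReconciliation, as in the minimal example below.
reconcilers = [
    BottomUp(),
    TopDown(method='forecast_proportions'),
    MiddleOut(middle_level='top_level/middle_level',  # a level name from the hierarchy spec (a tags key)
              top_down_method='forecast_proportions'),
    MinTrace(method='mint_shrink'),
    ERM(method='closed'),
]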

3. Minimal Example

!pip install hierarchicalforecast
!pip install -U numba statsforecast datasetsforecast

Wrangling Data

import numpy as np
import pandas as pd

We are going to create a synthetic dataset to illustrate a hierarchical time series structure like the one in Figure 1.

We will create a structure with two levels of aggregation over four bottom series, where the aggregations of the series are self-evident.

# Create Figure 1. synthetic bottom data
ds = pd.date_range(start='2000-01-01', end='2000-08-01', freq='MS')  # 8 monthly timestamps
y_base = np.arange(1,9)
r1 = y_base * (10**1)
r2 = y_base * (10**1)
r3 = y_base * (10**2)
r4 = y_base * (10**2)

ys = np.concatenate([r1, r2, r3, r4])
ds = np.tile(ds, 4)
unique_ids = ['r1'] * 8 + ['r2'] * 8 + ['r3'] * 8 + ['r4'] * 8
top_level = 'Australia'
middle_level = ['State1'] * 16 + ['State2'] * 16  # r1, r2 -> State1; r3, r4 -> State2
bottom_level = unique_ids

bottom_df = dict(ds=ds,
                 top_level=top_level, 
                 middle_level=middle_level, 
                 bottom_level=bottom_level,
                 y=ys)
bottom_df = pd.DataFrame(bottom_df)
bottom_df.groupby('bottom_level').head(2)
    ds          top_level  middle_level  bottom_level    y
0   2000-01-01  Australia  State1        r1             10
1   2000-02-01  Australia  State1        r1             20
8   2000-01-01  Australia  State1        r2             10
9   2000-02-01  Australia  State1        r2             20
16  2000-01-01  Australia  State2        r3            100
17  2000-02-01  Australia  State2        r3            200
24  2000-01-01  Australia  State2        r4            100
25  2000-02-01  Australia  State2        r4            200

The previously introduced hierarchical series $\mathbf{y}_{[a,b],\tau}$ is captured within the Y_hier_df dataframe.

The aggregation constraints matrix $\mathbf{S}_{[a,b][b]}$ is captured within the S_df dataframe.

Finally, tags contains, for each hierarchical level, the list of series identifiers in Y_hier_df that compose it; for example, tags['top_level'] contains the index of Australia’s aggregated series.

from hierarchicalforecast.utils import aggregate
# Create hierarchical structure and constraints
hierarchy_levels = [['top_level'],
                    ['top_level', 'middle_level'],
                    ['top_level', 'middle_level', 'bottom_level']]
Y_hier_df, S_df, tags = aggregate(df=bottom_df, spec=hierarchy_levels)
Y_hier_df = Y_hier_df.reset_index()
print('S_df.shape', S_df.shape)
print('Y_hier_df.shape', Y_hier_df.shape)
print("tags['top_level']", tags['top_level'])
S_df.shape (7, 4)
Y_hier_df.shape (56, 3)
tags['top_level'] ['Australia']
Y_hier_df.groupby('unique_id').head(2)
    unique_id            ds              y
0   Australia            2000-01-01  220.0
1   Australia            2000-02-01  440.0
8   Australia/State1     2000-01-01   20.0
9   Australia/State1     2000-02-01   40.0
16  Australia/State2     2000-01-01  200.0
17  Australia/State2     2000-02-01  400.0
24  Australia/State1/r1  2000-01-01   10.0
25  Australia/State1/r1  2000-02-01   20.0
32  Australia/State1/r2  2000-01-01   10.0
33  Australia/State1/r2  2000-02-01   20.0
40  Australia/State2/r3  2000-01-01  100.0
41  Australia/State2/r3  2000-02-01  200.0
48  Australia/State2/r4  2000-01-01  100.0
49  Australia/State2/r4  2000-02-01  200.0
S_df
                     Australia/State1/r1  Australia/State1/r2  Australia/State2/r3  Australia/State2/r4
Australia                            1.0                  1.0                  1.0                  1.0
Australia/State1                     1.0                  1.0                  0.0                  0.0
Australia/State2                     0.0                  0.0                  1.0                  1.0
Australia/State1/r1                  1.0                  0.0                  0.0                  0.0
Australia/State1/r2                  0.0                  1.0                  0.0                  0.0
Australia/State2/r3                  0.0                  0.0                  1.0                  0.0
Australia/State2/r4                  0.0                  0.0                  0.0                  1.0
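To see the coherency constraints at work, we can multiply S_df by the bottom-level values (the snippet below assumes S_df is laid out as printed above, with one row per series and one column per bottom series):

# Multiply S_df by the bottom-level values: the result reproduces Y_hier_df.
Y_bottom = (Y_hier_df[Y_hier_df['unique_id'].isin(S_df.columns)]
            .pivot(index='unique_id', columns='ds', values='y')
            .loc[S_df.columns])
Y_check = pd.DataFrame(S_df.values @ Y_bottom.values,
                       index=S_df.index, columns=Y_bottom.columns)
# Every row of Y_check equals the corresponding series in Y_hier_df.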

Base Predictions

Next, we compute the base forecasts for each time series using the Naive model. In general, the base forecasts stored in Y_hat_df are not coherent; in this exactly additive example the Naive forecasts happen to coincide with their reconciled counterparts, as the final table below shows.

from statsforecast.models import Naive
from statsforecast.core import StatsForecast
# Split train/test sets
Y_test_df  = Y_hier_df.groupby('unique_id').tail(4)
Y_train_df = Y_hier_df.drop(Y_test_df.index)

# Compute base Naive predictions
# Careful identifying the correct data freq: this data is monthly, so use 'MS'
fcst = StatsForecast(df=Y_train_df,
                     models=[Naive()],
                     freq='MS', n_jobs=-1)
Y_hat_df = fcst.forecast(h=4, fitted=True)
Y_fitted_df = fcst.forecast_fitted_values()

Reconciliation

from hierarchicalforecast.methods import BottomUp
from hierarchicalforecast.core import HierarchicalReconciliation
# You can select a reconciler from our collection
reconcilers = [BottomUp()] # MinTrace(method='mint_shrink')
hrec = HierarchicalReconciliation(reconcilers=reconcilers)

Y_rec_df = hrec.reconcile(Y_hat_df=Y_hat_df, 
                          Y_df=Y_fitted_df,
                          S=S_df, tags=tags)
Y_rec_df.groupby('unique_id').head(2)
                     ds          Naive  Naive/BottomUp
unique_id
Australia            2000-05-01  880.0           880.0
Australia            2000-06-01  880.0           880.0
Australia/State1     2000-05-01   80.0            80.0
Australia/State1     2000-06-01   80.0            80.0
Australia/State2     2000-05-01  800.0           800.0
Australia/State2     2000-06-01  800.0           800.0
Australia/State1/r1  2000-05-01   40.0            40.0
Australia/State1/r1  2000-06-01   40.0            40.0
Australia/State1/r2  2000-05-01   40.0            40.0
Australia/State1/r2  2000-06-01   40.0            40.0
Australia/State2/r3  2000-05-01  400.0           400.0
Australia/State2/r3  2000-06-01  400.0           400.0
Australia/State2/r4  2000-05-01  400.0           400.0
Australia/State2/r4  2000-06-01  400.0           400.0
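
As a quick check, we can verify that the reconciled bottom-level forecasts add up exactly to the reconciled total (the snippet below assumes the column names shown above):

# Sum the reconciled bottom-level forecasts per date and compare with the total.
rec = Y_rec_df.reset_index()
bottom_sum = (rec[rec['unique_id'].isin(S_df.columns)]
              .groupby('ds')['Naive/BottomUp'].sum())
total = rec[rec['unique_id'] == 'Australia'].set_index('ds')['Naive/BottomUp']
print((bottom_sum - total).abs().max())  # 0.0 -> coherent forecasts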

References