Minimal Example of Hierarchical Reconciliation
Large collections of time series are often organized into hierarchies at different aggregation levels, and their forecasts are required to satisfy the aggregation constraints, which poses the challenge of producing coherent forecasts. The HierarchicalForecast package provides a wide collection of Python implementations of classic hierarchical reconciliation algorithms. In this notebook we will show how to use the StatsForecast library to produce base forecasts, and then use the HierarchicalForecast package to reconcile them. You can run these experiments using CPU or GPU with Google Colab.

1. Libraries

!pip install hierarchicalforecast statsforecast datasetsforecast

2. Load Data

In this example we will use the TourismSmall dataset. The following cell gets the time series for the different levels in the hierarchy, the summing matrix S, which recovers the full dataset from the bottom-level series, and the indices of each hierarchy, denoted by tags.
import pandas as pd

from datasetsforecast.hierarchical import HierarchicalData, HierarchicalInfo
group_name = 'TourismSmall'
group = HierarchicalInfo.get_group(group_name)
Y_df, S_df, tags = HierarchicalData.load('./data', group_name)
S_df = S_df.reset_index(names="unique_id")
Y_df['ds'] = pd.to_datetime(Y_df['ds'])
S_df.iloc[:6, :6]
   unique_id  nsw-hol-city  nsw-hol-noncity  vic-hol-city  vic-hol-noncity  qld-hol-city
0  total      1.0           1.0              1.0           1.0              1.0
1  hol        1.0           1.0              1.0           1.0              1.0
2  vfr        0.0           0.0              0.0           0.0              0.0
3  bus        0.0           0.0              0.0           0.0              0.0
4  oth        0.0           0.0              0.0           0.0              0.0
5  nsw-hol    1.0           1.0              0.0           0.0              0.0
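To build intuition for what the summing matrix does, here is a minimal sketch on a made-up two-level hierarchy (toy numbers, not the TourismSmall data): multiplying S by the bottom-level values recovers every series in the hierarchy.

```python
import numpy as np

# Toy hierarchy: one total series aggregating two bottom-level series.
# Rows of S are [total, bottom_1, bottom_2]; columns are the bottom series.
S = np.array([
    [1.0, 1.0],  # total = bottom_1 + bottom_2
    [1.0, 0.0],  # bottom_1
    [0.0, 1.0],  # bottom_2
])
y_bottom = np.array([3.0, 4.0])  # bottom-level observations

# S recovers all levels of the hierarchy from the bottom level.
y_all = S @ y_bottom
print(y_all)  # [7. 3. 4.]
```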
tags
{'Country': array(['total'], dtype=object),
 'Country/Purpose': array(['hol', 'vfr', 'bus', 'oth'], dtype=object),
 'Country/Purpose/State': array(['nsw-hol', 'vic-hol', 'qld-hol', 'sa-hol', 'wa-hol', 'tas-hol',
        'nt-hol', 'nsw-vfr', 'vic-vfr', 'qld-vfr', 'sa-vfr', 'wa-vfr',
        'tas-vfr', 'nt-vfr', 'nsw-bus', 'vic-bus', 'qld-bus', 'sa-bus',
        'wa-bus', 'tas-bus', 'nt-bus', 'nsw-oth', 'vic-oth', 'qld-oth',
        'sa-oth', 'wa-oth', 'tas-oth', 'nt-oth'], dtype=object),
 'Country/Purpose/State/CityNonCity': array(['nsw-hol-city', 'nsw-hol-noncity', 'vic-hol-city',
        'vic-hol-noncity', 'qld-hol-city', 'qld-hol-noncity',
        'sa-hol-city', 'sa-hol-noncity', 'wa-hol-city', 'wa-hol-noncity',
        'tas-hol-city', 'tas-hol-noncity', 'nt-hol-city', 'nt-hol-noncity',
        'nsw-vfr-city', 'nsw-vfr-noncity', 'vic-vfr-city',
        'vic-vfr-noncity', 'qld-vfr-city', 'qld-vfr-noncity',
        'sa-vfr-city', 'sa-vfr-noncity', 'wa-vfr-city', 'wa-vfr-noncity',
        'tas-vfr-city', 'tas-vfr-noncity', 'nt-vfr-city', 'nt-vfr-noncity',
        'nsw-bus-city', 'nsw-bus-noncity', 'vic-bus-city',
        'vic-bus-noncity', 'qld-bus-city', 'qld-bus-noncity',
        'sa-bus-city', 'sa-bus-noncity', 'wa-bus-city', 'wa-bus-noncity',
        'tas-bus-city', 'tas-bus-noncity', 'nt-bus-city', 'nt-bus-noncity',
        'nsw-oth-city', 'nsw-oth-noncity', 'vic-oth-city',
        'vic-oth-noncity', 'qld-oth-city', 'qld-oth-noncity',
        'sa-oth-city', 'sa-oth-noncity', 'wa-oth-city', 'wa-oth-noncity',
        'tas-oth-city', 'tas-oth-noncity', 'nt-oth-city', 'nt-oth-noncity'],
       dtype=object)}
We split the dataframe in train/test splits.
Y_test_df = Y_df.groupby('unique_id').tail(group.horizon)
Y_train_df = Y_df.drop(Y_test_df.index)

3. Base forecasts

The following cell computes the base forecast for each time series using the AutoARIMA and Naive models. Observe that Y_hat_df contains the forecasts, but they are not coherent.
from statsforecast.core import StatsForecast
from statsforecast.models import AutoARIMA, Naive
fcst = StatsForecast(
    models=[AutoARIMA(season_length=group.seasonality), Naive()], 
    freq="QE", 
    n_jobs=-1
)
Y_hat_df = fcst.forecast(df=Y_train_df, h=group.horizon)

4. Hierarchical reconciliation

The following cell makes the previous forecasts coherent using the HierarchicalReconciliation class. The methods used to make the forecasts coherent are:
  • BottomUp: Sums the bottom-level forecasts up the hierarchy to obtain the forecasts of the upper levels.
  • TopDown: Takes the forecast of the top-most aggregate series and distributes it down to the disaggregated series using proportions.
  • MiddleOut: Anchors the predictions at a middle level, aggregating bottom-up above it and disaggregating top-down below it.
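As a rough sketch of what the first two methods do, consider a toy two-level hierarchy with incoherent base forecasts (made-up numbers, not the library's actual implementation):

```python
import numpy as np

# Base forecasts for [total, bottom_1, bottom_2]: 3 + 5 != 10, so incoherent.
y_hat = np.array([10.0, 3.0, 5.0])
S = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])

# BottomUp: keep the bottom forecasts, re-derive the total by addition.
bu = S @ y_hat[1:]
print(bu)  # [8. 3. 5.]

# TopDown (forecast proportions): keep the total, split it using the
# proportions implied by the base bottom-level forecasts.
props = y_hat[1:] / y_hat[1:].sum()  # [0.375, 0.625]
td = S @ (y_hat[0] * props)
print(td)  # [10.    3.75  6.25]
```

Both results are coherent (the total equals the sum of the bottom series); they differ in which base forecasts they trust.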
from hierarchicalforecast.core import HierarchicalReconciliation
from hierarchicalforecast.methods import BottomUp, TopDown, MiddleOut
reconcilers = [
    BottomUp(),
    TopDown(method='forecast_proportions'),
    TopDown(method='proportion_averages'),
    MiddleOut(middle_level="Country/Purpose/State", top_down_method="proportion_averages"),
]
hrec = HierarchicalReconciliation(reconcilers=reconcilers)
Y_rec_df = hrec.reconcile(Y_hat_df=Y_hat_df, Y_df=Y_train_df, S_df=S_df, tags=tags)

4.1 Coherence Diagnostics

The reconcile method supports an optional diagnostics=True parameter that computes a detailed report showing how reconciliation changed the forecasts. This is useful for:
  • Verifying that base forecasts were incoherent and reconciliation fixed them
  • Understanding which hierarchy levels were adjusted the most
  • Detecting if reconciliation introduced negative values
  • Confirming numerical coherence within tolerance
# Run reconciliation with diagnostics enabled
hrec_diag = HierarchicalReconciliation(reconcilers=[BottomUp(), TopDown(method='forecast_proportions')])
Y_rec_diag_df = hrec_diag.reconcile(
    Y_hat_df=Y_hat_df, 
    Y_df=Y_train_df, 
    S_df=S_df, 
    tags=tags,
    diagnostics=True  # Enable coherence diagnostics
)
The diagnostics are stored in hrec_diag.diagnostics as a DataFrame with metrics per hierarchical level:
# View the full diagnostics report
hrec_diag.diagnostics
    level    metric                          AutoARIMA/BottomUp  Naive/BottomUp  AutoARIMA/TopDown_method-forecast_proportions  Naive/TopDown_method-forecast_proportions
0   Country  coherence_residual_mae_before   1551.154858         0.0             1.551155e+03                                   0.0
1   Country  coherence_residual_rmse_before  1823.566338         0.0             1.823566e+03                                   0.0
2   Country  coherence_residual_mae_after    0.000000            0.0             7.275958e-12                                   0.0
3   Country  coherence_residual_rmse_after   0.000000            0.0             1.455192e-11                                   0.0
4   Country  adjustment_mae                  1551.154858         0.0             0.000000e+00                                   0.0
..  ...      ...                             ...                 ...             ...                                            ...
57  Overall  negative_count_after            0.000000            0.0             0.000000e+00                                   0.0
58  Overall  negative_introduced             0.000000            0.0             0.000000e+00                                   0.0
59  Overall  negative_removed                0.000000            0.0             0.000000e+00                                   0.0
60  Overall  is_coherent                     1.000000            1.0             1.000000e+00                                   1.0
61  Overall  coherence_max_violation         0.000000            0.0             2.910383e-11                                   0.0
Key metrics explained:
  • coherence_residual_mae_before: Mean absolute incoherence in base forecasts (should be > 0 if base forecasts are incoherent)
  • coherence_residual_mae_after: Mean absolute incoherence after reconciliation (should be ~0)
  • adjustment_mae/rmse/max: How much forecasts were adjusted by reconciliation
  • negative_count_before/after: Count of negative forecast values
  • is_coherent: Whether the reconciled forecasts satisfy the hierarchical constraints (1.0 = yes)
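The coherence residual metrics can be understood as the gap between each forecast and what the summing matrix implies from the bottom level. A minimal sketch on the same toy hierarchy as before (made-up numbers; the library's exact computation may differ):

```python
import numpy as np

# Coherence residual: forecast minus what S implies from the bottom level,
# i.e. y_hat - S @ y_hat_bottom. Zero everywhere means coherent.
S = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
y_hat = np.array([10.0, 3.0, 5.0])   # incoherent base forecasts
residual = y_hat - S @ y_hat[1:]     # [2., 0., 0.]

mae_before = np.abs(residual).mean()
print(mae_before)  # ~0.667

# After bottom-up reconciliation the residual vanishes.
y_rec = S @ y_hat[1:]
residual_after = y_rec - S @ y_rec[1:]
print(np.abs(residual_after).max())  # 0.0
```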
Let’s filter to see just the coherence verification:
# Check coherence metrics at the Overall level
coherence_check = hrec_diag.diagnostics.query(
    "level == 'Overall' and metric in ['coherence_residual_mae_before', 'coherence_residual_mae_after', 'is_coherent', 'coherence_max_violation']"
)
coherence_check
    level    metric                         AutoARIMA/BottomUp  Naive/BottomUp  AutoARIMA/TopDown_method-forecast_proportions  Naive/TopDown_method-forecast_proportions
48  Overall  coherence_residual_mae_before  91.123692           0.0             9.112369e+01                                   0.0
50  Overall  coherence_residual_mae_after   0.000000            0.0             2.119653e-13                                   0.0
60  Overall  is_coherent                    1.000000            1.0             1.000000e+00                                   1.0
61  Overall  coherence_max_violation        0.000000            0.0             2.910383e-11                                   0.0
We can also see which levels required the largest adjustments:
# Compare adjustment magnitude across levels
adjustment_by_level = hrec_diag.diagnostics.query("metric == 'adjustment_mae'")
adjustment_by_level
    level                              metric          AutoARIMA/BottomUp  Naive/BottomUp  AutoARIMA/TopDown_method-forecast_proportions  Naive/TopDown_method-forecast_proportions
4   Country                            adjustment_mae  1551.154858         0.0             0.000000                                       0.0
16  Country/Purpose                    adjustment_mae  996.859118          0.0             1106.796143                                    0.0
28  Country/Purpose/State              adjustment_mae  91.836329           0.0             151.248239                                     0.0
40  Country/Purpose/State/CityNonCity  adjustment_mae  0.000000            0.0             87.497279                                      0.0
52  Overall                            adjustment_mae  91.123692           0.0             152.381830                                     0.0

5. Evaluation

The HierarchicalForecast package includes the evaluate function to evaluate the different hierarchies, and we can use utilsforecast to compute the mean squared error of each method relative to a baseline model (here, Naive).
from hierarchicalforecast.evaluation import evaluate
from utilsforecast.losses import mse
df = Y_rec_df.merge(Y_test_df, on=['unique_id', 'ds'])
evaluation = evaluate(df = df,
                      tags = tags,
                      train_df = Y_train_df,
                      metrics = [mse],
                      benchmark="Naive")

evaluation.set_index(["level", "metric"]).filter(like="ARIMA", axis=1)
level                              metric      AutoARIMA  AutoARIMA/BottomUp  AutoARIMA/TopDown_method-forecast_proportions  AutoARIMA/TopDown_method-proportion_averages  AutoARIMA/MiddleOut_middle_level-Country/Purpose/State_top_down_method-proportion_averages
Country                            mse-scaled  0.123161   0.055264            0.123161                                       0.123161                                      0.079278
Country/Purpose                    mse-scaled  0.171063   0.077688            0.101570                                       0.128151                                      0.104186
Country/Purpose/State              mse-scaled  0.194383   0.149163            0.201738                                       0.327854                                      0.194383
Country/Purpose/State/CityNonCity  mse-scaled  0.170373   0.170373            0.210060                                       0.341365                                      0.225656
Overall                            mse-scaled  0.154912   0.085342            0.131308                                       0.168269                                      0.115569
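As a rough guide to reading the mse-scaled values, here is a toy computation assuming the metric is the model's MSE divided by the Naive benchmark's MSE (values below 1 beat the baseline; see the utilsforecast documentation for the exact definition used by evaluate):

```python
import numpy as np

# Made-up test values and forecasts for a single series.
y_true = np.array([10.0, 12.0, 11.0])
y_model = np.array([9.5, 12.5, 11.5])
y_naive = np.array([9.0, 10.0, 10.0])  # e.g. last observed value repeated


def mse(y, y_hat):
    """Mean squared error."""
    return np.mean((y - y_hat) ** 2)


# Benchmark-scaled MSE: < 1 means the model beats the Naive baseline.
scaled = mse(y_true, y_model) / mse(y_true, y_naive)
print(scaled)  # 0.125
```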
