Practical tips to improve performance when reconciling large hierarchies

When working with hierarchies containing thousands or millions of time series, default settings can lead to slow reconciliation and high memory usage. This guide covers concrete steps to improve performance. We will cover:

1. Using Polars instead of Pandas
2. Telling the library your data is balanced
3. Using sparse reconciliation methods
4. Choosing the right reconciliation method for your scale
5. Parallelizing non-negative reconciliation
6. Avoiding unnecessary computation
7. Profiling your pipeline
1. Libraries
2. Load Data
We use the TourismSmall dataset for demonstration. The same principles
apply to much larger hierarchies.
3. Base Forecasts
4. Performance Tips
Tip 1: Use Polars instead of Pandas
HierarchicalForecast supports both Pandas and Polars DataFrames
transparently via Narwhals.
Polars is generally faster for the DataFrame operations used internally
(sorting, joining, pivoting) and uses less memory due to Apache Arrow
columnar storage.
Simply pass Polars DataFrames to reconcile(); no code changes are needed
beyond the data loading step. If you use aggregate() to build your
hierarchy, pass a Polars DataFrame directly:
The sparse_s=True option in aggregate() is currently only
available for Pandas DataFrames. If you need sparse S matrices with
Polars, pass a Polars DataFrame without sparse_s; the core
reconciliation will still construct sparse matrices internally when
using sparse methods.
Tip 2: Set is_balanced=True
If all your time series have the same number of observations (same start
and end dates), set is_balanced=True in reconcile(). This skips an
expensive pivot() operation and uses a fast reshape instead.
If you build your hierarchy with aggregate(), the result is always balanced.
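To see why balanced data helps, here is a conceptual sketch (plain pandas/NumPy, not the library's internal code): for a balanced panel sorted by series and timestamp, a plain reshape of the values column produces the same series-by-time matrix as a pivot, without building the pivot's index machinery.

```python
import numpy as np
import pandas as pd

# Balanced panel: every series has the same timestamps, sorted by (id, ds).
df = pd.DataFrame({
    "unique_id": ["a", "a", "a", "b", "b", "b"],
    "ds": [1, 2, 3, 1, 2, 3],
    "y": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
})

# Expensive path: pivot builds index/column lookups.
pivoted = df.pivot(index="unique_id", columns="ds", values="y").to_numpy()

# Cheap path, valid only when balanced: a plain reshape.
n_series = df["unique_id"].nunique()
reshaped = df["y"].to_numpy().reshape(n_series, -1)

print(np.array_equal(pivoted, reshaped))  # same 2 x 3 matrix
```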
Tip 3: Use Sparse Reconciliation Methods
For large hierarchies, the dense summing matrix S (shape: n_hiers × n_bottom) can consume significant memory and slow down
matrix operations. Sparse methods use scipy.sparse matrices and
iterative solvers instead of dense linear algebra.
The library provides sparse variants of the main methods:
| Dense Method | Sparse Variant | Notes |
|---|---|---|
| BottomUp() | BottomUpSparse() | Drop-in replacement |
| TopDown(...) | TopDownSparse(...) | Strictly hierarchical data only |
| MiddleOut(...) | MiddleOutSparse(...) | Strictly hierarchical data only |
| MinTrace(method='ols') | MinTraceSparse(method='ols') | Uses iterative bicgstab solver |
| MinTrace(method='wls_struct') | MinTraceSparse(method='wls_struct') | Uses iterative bicgstab solver |
| MinTrace(method='wls_var') | MinTraceSparse(method='wls_var') | Uses iterative bicgstab solver |
MinTraceSparse constructs a scipy.sparse.linalg.LinearOperator for
the projection matrix P and solves the system iteratively with
bicgstab, avoiding materialization of the full dense P matrix.
When to use sparse methods: when your hierarchy has thousands of
bottom-level series or more. For small hierarchies (< 500 series), the
overhead of sparse operations may outweigh the savings.
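To illustrate the mechanics (a conceptual sketch with SciPy, not the library's code): for the OLS projection ỹ = S(SᵀS)⁻¹Sᵀŷ, a sparse S plus the iterative bicgstab solver reproduces the dense closed-form result to solver tolerance, without dense linear algebra.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import bicgstab

# Tiny hierarchy: total, two aggregates, four bottom series.
S_dense = np.array([
    [1, 1, 1, 1],   # total
    [1, 1, 0, 0],   # group A
    [0, 0, 1, 1],   # group B
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
], dtype=float)
S = sparse.csr_matrix(S_dense)

y_hat = np.array([100.0, 55.0, 42.0, 30.0, 26.0, 20.0, 21.0])

# Dense OLS reconciliation: solve (S'S) beta = S' y_hat directly.
beta_dense = np.linalg.solve(S_dense.T @ S_dense, S_dense.T @ y_hat)

# Sparse/iterative: same normal equations, solved with bicgstab.
beta_sparse, info = bicgstab(S.T @ S, S.T @ y_hat, atol=1e-10)

print(info == 0)  # 0 means the iterative solver converged
# Reconciled forecasts agree to solver tolerance.
print(np.allclose(S_dense @ beta_dense, S @ beta_sparse, atol=0.1))
```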
Tip 4: Choose the Right Reconciliation Method
Reconciliation methods vary significantly in computational cost. Here is a rough ranking from fastest to slowest:

| Speed | Method | Requires insample data? | Notes |
|---|---|---|---|
| Fastest | BottomUp / BottomUpSparse | No | Simple aggregation, no optimization |
| Fast | TopDown / TopDownSparse | Depends on variant | Strictly hierarchical only |
| Fast | MinTrace(method='ols') | No | Closed-form, np.linalg.solve |
| Fast | MinTrace(method='wls_struct') | No | Closed-form, weighted by hierarchy structure |
| Medium | MinTraceSparse(method='ols'/'wls_struct') | No | Iterative solver; better for very large S |
| Medium | MinTrace(method='wls_var') | Yes | Diagonal covariance from residuals |
| Slow | MinTrace(method='mint_shrink') | Yes | Full covariance with shrinkage (Numba-accelerated) |
| Slow | MinTrace(method='mint_cov') | Yes | Full empirical covariance (Numba-accelerated) |
| Slow | ERM(method='closed') | Yes | Pseudoinverse on large matrices |
| Slowest | ERM(method='reg') | Yes | Lasso coordinate descent (iterative) |
| +Cost | Any method with nonnegative=True | — | Adds a QP solve per horizon step |
- BottomUp and MinTrace(method='ols') are the cheapest and don’t require insample data (Y_df).
- Methods requiring insample residuals (wls_var, mint_shrink, mint_cov) need the Y_df argument passed to reconcile(). Omitting Y_df when not needed saves computation.
- For very large hierarchies, prefer MinTraceSparse over MinTrace to avoid dense matrix operations.
- mint_shrink and mint_cov compute a full n×n covariance matrix; this scales quadratically with the number of series.
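The quadratic cost of the full-covariance methods is easy to see with a back-of-the-envelope sketch (plain NumPy, not the library's implementation): the residual covariance has one row and one column per series.

```python
import numpy as np

# mint_cov / mint_shrink estimate an n x n covariance of the
# insample residuals, where n is the total number of series.
rng = np.random.default_rng(0)
residuals = rng.standard_normal((100, 48))  # 100 series, 48 insample steps
Sigma = np.cov(residuals)                   # shape (100, 100)
print(Sigma.shape)

# Memory for the covariance alone grows quadratically with n:
for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} series -> {n * n * 8 / 1e9:,.2f} GB (float64)")
```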
Tip 5: Parallelize Non-Negative Reconciliation
When using nonnegative=True, a quadratic programming (QP) problem is
solved for each forecast horizon step. Use num_threads to parallelize
these independent QP calls:
The num_threads parameter controls the ThreadPoolExecutor pool size.
Set it to the number of available CPU cores. Note that num_threads
only takes effect when nonnegative=True.
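The underlying mechanism is the standard-library thread pool. Conceptually (a sketch, not the library's internals), each horizon step is an independent solve that can be mapped across workers; a plain linear solve stands in here for the per-step QP.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
A = A @ A.T + 50 * np.eye(50)        # well-conditioned SPD system
B = rng.standard_normal((8, 50))     # one right-hand side per horizon step

def solve_step(b):
    # Stand-in for the per-horizon-step QP solved when nonnegative=True.
    return np.linalg.solve(A, b)

# Parallel: the 8 independent solves are distributed over 4 threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(solve_step, B))

# Sequential reference: identical results, just slower on multicore CPUs.
sequential = [solve_step(b) for b in B]
print(all(np.allclose(p, s) for p, s in zip(parallel, sequential)))
```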
MinTraceSparse also offers a qp parameter (default True). Setting
qp=False replaces the full QP solve with simple clipping of negative
values — much faster but lower quality:
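To illustrate the quality/speed trade-off (a sketch using SciPy's nnls as a stand-in for the library's QP solve, and clipped least squares as the cheap alternative): the constrained solve keeps the reconciliation error lower, while clipping merely zeroes negatives.

```python
import numpy as np
from scipy.optimize import nnls

# Summing matrix: one total over three bottom series.
S = np.array([
    [1, 1, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
], dtype=float)

# Base forecasts with a negative bottom-level value.
y_hat = np.array([10.0, 6.0, 5.0, -1.0])

# qp=True-style: nonnegative least squares (stand-in for the QP).
beta_qp, _ = nnls(S, y_hat)

# qp=False-style: unconstrained fit, then clip negatives to zero.
beta_clip = np.clip(np.linalg.lstsq(S, y_hat, rcond=None)[0], 0.0, None)

# Both are nonnegative, but the QP-style solution fits better.
err_qp = np.linalg.norm(S @ beta_qp - y_hat)
err_clip = np.linalg.norm(S @ beta_clip - y_hat)
print((beta_qp >= 0).all(), (beta_clip >= 0).all())
print(err_qp < err_clip)
```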
Tip 6: Avoid Unnecessary Computation
Skip probabilistic forecasts when not needed. If you only need point forecasts, don’t pass the level parameter:
If you do need prediction intervals, intervals_method='normality' is the
cheapest option. The 'bootstrap' and 'permbu' methods require many
re-evaluations and are significantly slower.
Skip diagnostics in production. The diagnostics=True flag computes
coherence checks across all forecasts — useful for debugging but adds
overhead:
Skip Y_df when insample data isn’t needed. BottomUp,
TopDown(method='forecast_proportions'), MinTrace(method='ols'), and
MinTrace(method='wls_struct') don’t use insample data. Omitting Y_df
skips the data preparation for insample values:
Tip 7: Profile Your Pipeline
HierarchicalForecast provides built-in profiling tools to identify
bottlenecks.
The execution_times attribute: After calling reconcile(), the
HierarchicalReconciliation instance exposes a dictionary of per-method
execution times:
The CodeTimer context manager: Use this to time any block of code in
your pipeline:
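If you want the same pattern outside the library, a minimal stand-in for such a timing context manager (hypothetical, not the library's class) takes a few lines of standard-library Python:

```python
import time
from contextlib import contextmanager

@contextmanager
def code_timer(name: str):
    # Minimal stand-in for a timing context manager:
    # prints wall-clock time spent inside the with-block.
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        print(f"{name}: {elapsed:.4f}s")

with code_timer("toy workload"):
    total = sum(i * i for i in range(100_000))

print(total > 0)
```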
5. Summary Checklist
Here is a quick checklist to optimize your hierarchical forecasting pipeline:

| Tip | Action | Impact |
|---|---|---|
| Use Polars | Pass Polars DataFrames instead of Pandas | Faster DataFrame ops, lower memory |
| is_balanced=True | Set when all series have equal length | Skips expensive pivot |
| Sparse methods | Use BottomUpSparse, MinTraceSparse, etc. | Less memory, iterative solvers |
| Right method | Start with BottomUp or MinTrace(method='ols') | Avoid unnecessary covariance computation |
| num_threads | Set >1 when using nonnegative=True | Parallel QP solves |
| Skip intervals | Omit level parameter for point forecasts | Avoids sampling/interval computation |
| Skip Y_df | Omit when reconciler doesn’t need insample data | Skips data preparation |
| Profile | Check hrec.execution_times and use CodeTimer | Find bottlenecks |
References
- Wickramasuriya, S. L., Athanasopoulos, G., & Hyndman, R. J. (2019). “Optimal forecast reconciliation for hierarchical and grouped time series through trace minimization”. Journal of the American Statistical Association, 114, 804-819.
- Wickramasuriya, S.L., Turlach, B.A. & Hyndman, R.J. (2020). “Optimal non-negative forecast reconciliation”. Stat Comput 30, 1167-1182.
- Hyndman, R.J., & Athanasopoulos, G. (2021). “Forecasting: principles and practice, 3rd edition: Chapter 11: Forecasting hierarchical and grouped series.”. OTexts: Melbourne, Australia.

