Large collections of time series are often organized into structures with several aggregation levels, and their forecasts are required to satisfy the corresponding aggregation constraints. This poses the challenge of designing algorithms capable of producing coherent forecasts.

The HierarchicalForecast package provides the most comprehensive collection of Python implementations of hierarchical forecasting algorithms that follow classic hierarchical reconciliation. All the methods have a reconcile function capable of reconciling base forecasts given as numpy arrays.

Most reconciliation methods can be described by the following convenient linear algebra notation:

$$\tilde{\mathbf{y}}_{[a,b],\tau} = \mathbf{S}_{[a,b][b]} \mathbf{P}_{[b][a,b]} \hat{\mathbf{y}}_{[a,b],\tau}$$

where $a, b$ represent the aggregate and bottom levels, $\mathbf{S}_{[a,b][b]}$ contains the hierarchical aggregation constraints, and $\mathbf{P}_{[b][a,b]}$ varies across reconciliation methods. The reconciled predictions are $\tilde{\mathbf{y}}_{[a,b],\tau}$, and the base predictions are $\hat{\mathbf{y}}_{[a,b],\tau}$.
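As a minimal sketch of this notation (not part of the package API), consider a toy hierarchy with one aggregate and two bottom series; the summing matrix S stacks the aggregation constraint on top of an identity, and any choice of P maps (possibly incoherent) base forecasts into coherent ones:

import numpy as np

# Toy hierarchy: total = a + b (one aggregate series, two bottom series).
S = np.array([[1., 1.],   # total
              [1., 0.],   # bottom series a
              [0., 1.]])  # bottom series b

# Incoherent base forecasts for a single horizon step, ordered [total, a, b].
y_hat = np.array([[105.], [60.], [50.]])

# One possible P: the Bottom-Up choice that keeps only the bottom forecasts.
P = np.array([[0., 1., 0.],
              [0., 0., 1.]])

y_tilde = S @ P @ y_hat   # coherent forecasts: [[110.], [60.], [50.]]
assert np.isclose(y_tilde[0, 0], y_tilde[1, 0] + y_tilde[2, 0])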

1. Bottom-Up


source

BottomUpSparse

 BottomUpSparse ()

*BottomUpSparse Reconciliation Class.

This is an implementation of Bottom-Up reconciliation using a sparse-matrix representation of S, intended for datasets with very many time series, where a dense summing matrix may no longer fit in memory.

See the parent class for more details.*


source

BottomUp

 BottomUp ()

*Bottom Up Reconciliation Class. The most basic hierarchical reconciliation is performed using a Bottom-Up strategy, first proposed by Orcutt et al. in 1968. The corresponding hierarchical "projection" matrix is defined as:

$$\mathbf{P}_{\text{BU}} = [\mathbf{0}_{\mathrm{[b],[a]}}\;|\;\mathbf{I}_{\mathrm{[b][b]}}]$$

Parameters:
None

References:
- Orcutt, G.H., Watts, H.W., & Edwards, J.B. (1968). "Data aggregation and information loss". The American Economic Review, 58, 773–787.*
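A one-line numpy sketch of this projection matrix, assuming n_a aggregate rows and n_b bottom rows ordered as in the toy S matrix shown earlier:

import numpy as np

n_a, n_b = 1, 2                                         # illustrative sizes
P_bu = np.hstack([np.zeros((n_b, n_a)), np.eye(n_b)])   # P_BU = [0 | I]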


source

BottomUp.fit

 BottomUp.fit (S:numpy.ndarray, y_hat:numpy.ndarray,
               idx_bottom:numpy.ndarray,
               y_insample:Optional[numpy.ndarray]=None,
               y_hat_insample:Optional[numpy.ndarray]=None,
               sigmah:Optional[numpy.ndarray]=None,
               intervals_method:Optional[str]=None,
               num_samples:Optional[int]=None, seed:Optional[int]=None,
               tags:Optional[Dict[str,numpy.ndarray]]=None)

*Bottom Up Fit Method.

Parameters:
S: Summing matrix of size (base, bottom).
y_hat: Forecast values of size (base, horizon).
idx_bottom: Indices corresponding to the bottom level of S, size (bottom).
level: float list 0-100, confidence levels for prediction intervals.
intervals_method: Sampler for prediction intervals, one of normality, bootstrap, permbu.
**sampler_kwargs: Coherent sampler instantiation arguments.

Returns:
self: object, fitted reconciler.*


source

BottomUp.predict

 BottomUp.predict (S:numpy.ndarray, y_hat:numpy.ndarray,
                   level:Optional[List[int]]=None)

*Predict using reconciler.

Predict using fitted mean and probabilistic reconcilers.

Parameters:
S: Summing matrix of size (base, bottom).
y_hat: Forecast values of size (base, horizon).
level: float list 0-100, confidence levels for prediction intervals.

Returns:
y_tilde: Reconciled predictions.*


source

BottomUp.fit_predict

 BottomUp.fit_predict (S:numpy.ndarray, y_hat:numpy.ndarray,
                       idx_bottom:numpy.ndarray,
                       y_insample:Optional[numpy.ndarray]=None,
                       y_hat_insample:Optional[numpy.ndarray]=None,
                       sigmah:Optional[numpy.ndarray]=None,
                       level:Optional[List[int]]=None,
                       intervals_method:Optional[str]=None,
                       num_samples:Optional[int]=None,
                       seed:Optional[int]=None,
                       tags:Optional[Dict[str,numpy.ndarray]]=None)

*BottomUp Reconciliation Method.

Parameters:
S: Summing matrix of size (base, bottom).
y_hat: Forecast values of size (base, horizon).
idx_bottom: Indices corresponding to the bottom level of S, size (bottom).
level: float list 0-100, confidence levels for prediction intervals.
intervals_method: Sampler for prediction intervals, one of normality, bootstrap, permbu.
**sampler_kwargs: Coherent sampler instantiation arguments.

Returns:
y_tilde: Reconciled y_hat using the Bottom Up approach.*
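A minimal usage sketch of the signature documented above; S, y_hat, and idx_bottom are toy inputs, and the exact structure of the returned object may vary with the installed version:

import numpy as np
from hierarchicalforecast.methods import BottomUp

S = np.array([[1., 1.],
              [1., 0.],
              [0., 1.]])
y_hat = np.array([[105., 120.],    # total,   horizon = 2
                  [ 60.,  70.],    # bottom a
                  [ 50.,  55.]])   # bottom b
idx_bottom = np.array([1, 2])      # rows of S holding bottom-level series

res = BottomUp().fit_predict(S=S, y_hat=y_hat, idx_bottom=idx_bottom)
# `res` holds the reconciled forecasts (the bottom rows aggregated upward).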


source

BottomUp.sample

 BottomUp.sample (num_samples:int)

*Sample probabilistic coherent distribution.

Generates n samples from a probabilistic coherent distribution. The method uses fitted mean and probabilistic reconcilers, defined by the intervals_method selected during the reconciler’s instantiation. Currently available: normality, bootstrap, permbu.

Parameters:
num_samples: int, number of samples generated from coherent distribution.

Returns:
samples: Coherent samples of size (num_series, horizon, num_samples).*

2. Top-Down


source

TopDownSparse

 TopDownSparse (method:str)

*TopDownSparse Reconciliation Class.

This is an implementation of top-down reconciliation using the sparse matrix approach. It works much more efficiently on data sets with many time series.

See the parent class for more details.*


source

TopDown

 TopDown (method:str)

*Top Down Reconciliation Class.

The Top-Down hierarchical reconciliation method distributes the total aggregate predictions down the hierarchy using proportions $\mathbf{p}_{\mathrm{[b]}}$ that can be actual historical values or estimates.

$$\mathbf{P}=[\mathbf{p}_{\mathrm{[b]}}\;|\;\mathbf{0}_{\mathrm{[b][a,b\;-1]}}]$$

Parameters:
method: One of forecast_proportions, average_proportions and proportion_averages.

References:
- C.W. Gross & J.E. Sohl (1990). "Disaggregation methods to expedite product line forecasting". Journal of Forecasting, 9, 233–254. doi:10.1002/for.3980090304.
- G. Fliedner (1999). "An investigation of aggregate variable time series forecast strategies with specific subaggregate time series statistical correlation". Computers and Operations Research, 26, 1133–1149. doi:10.1016/S0305-0548(99)00017-9.*
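A rough numpy illustration of the two historical-proportion options (a sketch of the standard definitions, not the package internals): average_proportions averages the per-period shares of each bottom series in the total, while proportion_averages divides each bottom series' average by the total's average.

import numpy as np

y_total  = np.array([100., 120., 110.])          # insample aggregate series
y_bottom = np.array([[60., 66., 55.],            # insample bottom series
                     [40., 54., 55.]])

p_average_proportions = (y_bottom / y_total).mean(axis=1)       # mean of per-period shares
p_proportion_averages = y_bottom.mean(axis=1) / y_total.mean()  # ratio of the means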


source

TopDown.fit

 TopDown.fit (S, y_hat, y_insample:numpy.ndarray,
              y_hat_insample:Optional[numpy.ndarray]=None,
              sigmah:Optional[numpy.ndarray]=None,
              intervals_method:Optional[str]=None,
              num_samples:Optional[int]=None, seed:Optional[int]=None,
              tags:Optional[Dict[str,numpy.ndarray]]=None,
              idx_bottom:Optional[numpy.ndarray]=None)

*TopDown Fit Method.

Parameters:
S: Summing matrix of size (base, bottom).
y_hat: Forecast values of size (base, horizon).
tags: Each key is a level and each value its S indices.
y_insample: Insample values of size (base, insample_size). Optional for forecast_proportions method.
idx_bottom: Indices corresponding to the bottom level of S, size (bottom).
level: float list 0-100, confidence levels for prediction intervals.
intervals_method: Sampler for prediction intervals, one of normality, bootstrap, permbu.
**sampler_kwargs: Coherent sampler instantiation arguments.

Returns:
self: object, fitted reconciler.*


source

TopDown.predict

 TopDown.predict (S:numpy.ndarray, y_hat:numpy.ndarray,
                  level:Optional[List[int]]=None)

*Predict using reconciler.

Predict using fitted mean and probabilistic reconcilers.

Parameters:
S: Summing matrix of size (base, bottom).
y_hat: Forecast values of size (base, horizon).
level: float list 0-100, confidence levels for prediction intervals.

Returns:
y_tilde: Reconciled predictions.*


source

TopDown.fit_predict

 TopDown.fit_predict (S:numpy.ndarray, y_hat:numpy.ndarray,
                      tags:Dict[str,numpy.ndarray],
                      idx_bottom:numpy.ndarray=None,
                      y_insample:Optional[numpy.ndarray]=None,
                      y_hat_insample:Optional[numpy.ndarray]=None,
                      sigmah:Optional[numpy.ndarray]=None,
                      level:Optional[List[int]]=None,
                      intervals_method:Optional[str]=None,
                      num_samples:Optional[int]=None,
                      seed:Optional[int]=None)

*Top Down Reconciliation Method.

Parameters:
S: Summing matrix of size (base, bottom).
y_hat: Forecast values of size (base, horizon).
tags: Each key is a level and each value its S indices.
y_insample: Insample values of size (base, insample_size). Optional for forecast_proportions method.
idx_bottom: Indices corresponding to the bottom level of S, size (bottom).
level: float list 0-100, confidence levels for prediction intervals.
intervals_method: Sampler for prediction intervals, one of normality, bootstrap, permbu.
**sampler_kwargs: Coherent sampler instantiation arguments.

Returns:
y_tilde: Reconciled y_hat using the Top Down approach.*


source

TopDown.sample

 TopDown.sample (num_samples:int)

*Sample probabilistic coherent distribution.

Generates n samples from a probabilistic coherent distribution. The method uses fitted mean and probabilistic reconcilers, defined by the intervals_method selected during the reconciler’s instantiation. Currently available: normality, bootstrap, permbu.

Parameters:
num_samples: int, number of samples generated from coherent distribution.

Returns:
samples: Coherent samples of size (num_series, horizon, num_samples).*

3. Middle-Out


source

MiddleOutSparse

 MiddleOutSparse (middle_level:str, top_down_method:str)

*MiddleOutSparse Reconciliation Class.

This is an implementation of middle-out reconciliation using the sparse matrix approach. It works much more efficiently on data sets with many time series.

See the parent class for more details.*


source

MiddleOut

 MiddleOut (middle_level:str, top_down_method:str)

*Middle Out Reconciliation Class.

This method is only available for strictly hierarchical structures. It anchors the base predictions at a chosen middle level: the levels above the middle level are reconciled with a Bottom-Up approach, while the levels below use a Top-Down approach.

Parameters:
middle_level: Middle level.
top_down_method: One of forecast_proportions, average_proportions and proportion_averages.

References:
- Hyndman, R.J., & Athanasopoulos, G. (2021). "Forecasting: principles and practice, 3rd edition. Chapter 11: Forecasting hierarchical and grouped series". OTexts: Melbourne, Australia. OTexts.com/fpp3. Accessed July 2022.*


source

MiddleOut.fit_predict

 MiddleOut.fit_predict (S:numpy.ndarray, y_hat:numpy.ndarray,
                        tags:Dict[str,numpy.ndarray],
                        y_insample:Optional[numpy.ndarray]=None,
                        level:Optional[List[int]]=None,
                        intervals_method:Optional[str]=None)

*Middle Out Reconciliation Method.

Parameters:
S: Summing matrix of size (base, bottom).
y_hat: Forecast values of size (base, horizon).
tags: Each key is a level and each value its S indices.
y_insample: Insample values of size (base, insample_size). Only used for forecast_proportions.

Returns:
y_tilde: Reconciled y_hat using the Middle Out approach.*
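A minimal usage sketch of the signature documented above; the level names in tags and middle_level are hypothetical, and the toy arrays are illustrative only:

import numpy as np
from hierarchicalforecast.methods import MiddleOut

# Toy strict hierarchy: Country -> 2 Regions -> 4 Stores (2 per region).
S = np.array([[1., 1., 1., 1.],    # Country
              [1., 1., 0., 0.],    # Region 1
              [0., 0., 1., 1.],    # Region 2
              [1., 0., 0., 0.],    # Stores
              [0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
tags = {'Country': np.array([0]),
        'Country/Region': np.array([1, 2]),
        'Country/Region/Store': np.array([3, 4, 5, 6])}
y_hat = np.array([[100., 110.], [60., 66.], [40., 44.],     # toy base forecasts,
                  [35., 38.], [25., 28.], [22., 24.], [18., 20.]])  # horizon = 2
y_insample = np.hstack([y_hat, y_hat])                      # toy insample values

reconciler = MiddleOut(middle_level='Country/Region',
                       top_down_method='forecast_proportions')
res = reconciler.fit_predict(S=S, y_hat=y_hat, tags=tags, y_insample=y_insample)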

4. Min-Trace


source

MinTraceSparse

 MinTraceSparse (method:str, nonnegative:bool=False,
                 mint_shr_ridge:Optional[float]=2e-08, num_threads:int=1)

*MinTraceSparse Reconciliation Class.

This is the implementation of a subset of MinTrace features using the sparse matrix approach. It works much more efficiently on datasets with many time series.

See the parent class for more details.

Currently supported:
- methods using a diagonal W matrix, i.e. "ols", "wls_struct", "wls_var";
- the standard MinT version (the non-negative variant is not supported).

Note: due to the numerical instability of the matrix inversion when creating the P matrix, the method is NOT guaranteed to give identical results to the non-sparse version.*


source

MinTrace

 MinTrace (method:str, nonnegative:bool=False,
           mint_shr_ridge:Optional[float]=2e-08, num_threads:int=1)

*MinTrace Reconciliation Class.

This reconciliation algorithm, proposed by Wickramasuriya et al., depends on a generalized least squares estimator and an estimator of the covariance matrix of the coherency errors $\mathbf{W}_{h}$. The MinTrace algorithm minimizes the squared errors of the coherent forecasts under an unbiasedness assumption; the solution has a closed form.

$$\mathbf{P}_{\text{MinT}}=\left(\mathbf{S}^{\intercal}\mathbf{W}_{h}^{-1}\mathbf{S}\right)^{-1} \mathbf{S}^{\intercal}\mathbf{W}^{-1}_{h}$$

Parameters:
method: str, one of ols, wls_struct, wls_var, mint_shrink, mint_cov.
nonnegative: bool, reconciled forecasts should be nonnegative?
mint_shr_ridge: float=2e-8, ridge numeric protection to MinTrace-shr covariance estimator.
num_threads: int=1, number of threads to use for solving the optimization problems (when nonnegative=True).

References:
- Wickramasuriya, S.L., Athanasopoulos, G., & Hyndman, R.J. (2019). "Optimal forecast reconciliation for hierarchical and grouped time series through trace minimization". Journal of the American Statistical Association, 114, 804–819. doi:10.1080/01621459.2018.1448825.
- Wickramasuriya, S.L., Turlach, B.A., & Hyndman, R.J. (2020). "Optimal non-negative forecast reconciliation". Stat Comput 30, 1167–1182. https://doi.org/10.1007/s11222-020-09930-0.*
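For intuition on the simplest covariance choices (a sketch of the standard definitions, not the package internals): ols takes $\mathbf{W}_{h} = \mathbf{I}$, while wls_struct takes a diagonal $\mathbf{W}_{h}$ whose entries count the bottom series aggregated into each row of S; the resulting projection follows the formula above.

import numpy as np

S = np.array([[1., 1.],
              [1., 0.],
              [0., 1.]])

W_ols        = np.eye(S.shape[0])                  # identity covariance
W_wls_struct = np.diag(S @ np.ones(S.shape[1]))    # diag([2, 1, 1]) for this S

Winv = np.linalg.inv(W_wls_struct)
P    = np.linalg.solve(S.T @ Winv @ S, S.T @ Winv)  # (S' W^-1 S)^-1 S' W^-1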


source

MinTrace.fit

 MinTrace.fit (S, y_hat, y_insample:Optional[numpy.ndarray]=None,
               y_hat_insample:Optional[numpy.ndarray]=None,
               sigmah:Optional[numpy.ndarray]=None,
               intervals_method:Optional[str]=None,
               num_samples:Optional[int]=None, seed:Optional[int]=None,
               tags:Optional[Dict[str,numpy.ndarray]]=None,
               idx_bottom:Optional[numpy.ndarray]=None)

*MinTrace Fit Method.

Parameters:
S: Summing matrix of size (base, bottom).
y_hat: Forecast values of size (base, horizon).
tags: Each key is a level and each value its S indices.
y_insample: Insample values of size (base, insample_size). Only used by wls_var, mint_cov, mint_shrink.
idx_bottom: Indices corresponding to the bottom level of S, size (bottom).
level: float list 0-100, confidence levels for prediction intervals.
intervals_method: Sampler for prediction intervals, one of normality, bootstrap, permbu.
**sampler_kwargs: Coherent sampler instantiation arguments.

Returns:
self: object, fitted reconciler.*


source

MinTrace.predict

 MinTrace.predict (S:numpy.ndarray, y_hat:numpy.ndarray,
                   level:Optional[List[int]]=None)

*Predict using reconciler.

Predict using fitted mean and probabilistic reconcilers.

Parameters:
S: Summing matrix of size (base, bottom).
y_hat: Forecast values of size (base, horizon).
level: float list 0-100, confidence levels for prediction intervals.

Returns:
y_tilde: Reconciled predictions.*


source

MinTrace.fit_predict

 MinTrace.fit_predict (S:numpy.ndarray, y_hat:numpy.ndarray,
                       idx_bottom:numpy.ndarray=None,
                       y_insample:Optional[numpy.ndarray]=None,
                       y_hat_insample:Optional[numpy.ndarray]=None,
                       sigmah:Optional[numpy.ndarray]=None,
                       level:Optional[List[int]]=None,
                       intervals_method:Optional[str]=None,
                       num_samples:Optional[int]=None,
                       seed:Optional[int]=None,
                       tags:Optional[Dict[str,numpy.ndarray]]=None)

*MinTrace Reconciliation Method.

Parameters:
S: Summing matrix of size (base, bottom).
y_hat: Forecast values of size (base, horizon).
y_insample: Insample values of size (base, insample_size). Only used by wls_var, mint_cov, mint_shrink
y_hat_insample: Insample fitted values of size (base, insample_size). Only used by wls_var, mint_cov, mint_shrink
idx_bottom: Indices corresponding to the bottom level of S, size (bottom).
level: float list 0-100, confidence levels for prediction intervals.
intervals_method: Sampler for prediction intervals, one of normality, bootstrap, permbu.

Returns:
y_tilde: Reconciled y_hat using the MinTrace approach.*


source

MinTrace.sample

 MinTrace.sample (num_samples:int)

*Sample probabilistic coherent distribution.

Generates n samples from a probabilistic coherent distribution. The method uses fitted mean and probabilistic reconcilers, defined by the intervals_method selected during the reconciler’s instantiation. Currently available: normality, bootstrap, permbu.

Parameters:
num_samples: int, number of samples generated from coherent distribution.

Returns:
samples: Coherent samples of size (num_series, horizon, num_samples).*

5. Optimal Combination


source

OptimalCombination

 OptimalCombination (method:str, nonnegative:bool=False,
                     num_threads:int=1)

*Optimal Combination Reconciliation Class.

This reconciliation algorithm was proposed by Hyndman et al. (2011). The method uses a generalized least squares estimator based on the covariance matrix of the coherency errors. Considering the covariance of the base forecasts $\textrm{Var}(\epsilon_{h}) = \Sigma_{h}$, the $\mathbf{P}$ matrix of this method is defined by:

$$\mathbf{P} = \left(\mathbf{S}^{\intercal}\Sigma_{h}^{\dagger}\mathbf{S}\right)^{-1}\mathbf{S}^{\intercal}\Sigma^{\dagger}_{h}$$

where $\Sigma_{h}^{\dagger}$ denotes the variance pseudo-inverse. The method was later proven equivalent to MinTrace variants.

Parameters:
method: str, allowed optimal combination methods: ‘ols’, ‘wls_struct’.
nonnegative: bool, reconciled forecasts should be nonnegative?

References:
- Rob J. Hyndman, Roman A. Ahmed, George Athanasopoulos, Han Lin Shang (2010). "Optimal Combination Forecasts for Hierarchical Time Series".
- Shanika L. Wickramasuriya, George Athanasopoulos and Rob J. Hyndman (2010). "Optimal Combination Forecasts for Hierarchical Time Series".
- Wickramasuriya, S.L., Turlach, B.A., & Hyndman, R.J. (2020). "Optimal non-negative forecast reconciliation". Stat Comput 30, 1167–1182. https://doi.org/10.1007/s11222-020-09930-0.*
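A small numpy sketch of the ols special case, where $\Sigma_{h}$ is taken proportional to the identity and the formula reduces to $\mathbf{P} = (\mathbf{S}^{\intercal}\mathbf{S})^{-1}\mathbf{S}^{\intercal}$ (illustrative only, not the package internals):

import numpy as np

S = np.array([[1., 1.],
              [1., 0.],
              [0., 1.]])

P_ols = np.linalg.solve(S.T @ S, S.T)   # (S'S)^{-1} S'
M = S @ P_ols                           # maps base forecasts onto the coherent subspace
assert np.allclose(M @ M, M)            # S P is idempotent, i.e. a projection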


source

OptimalCombination.fit

 OptimalCombination.fit (S, y_hat,
                         y_insample:Optional[numpy.ndarray]=None,
                         y_hat_insample:Optional[numpy.ndarray]=None,
                         sigmah:Optional[numpy.ndarray]=None,
                         intervals_method:Optional[str]=None,
                         num_samples:Optional[int]=None,
                         seed:Optional[int]=None,
                         tags:Optional[Dict[str,numpy.ndarray]]=None,
                         idx_bottom:Optional[numpy.ndarray]=None)

*OptimalCombination Fit Method.

Parameters:
S: Summing matrix of size (base, bottom).
y_hat: Forecast values of size (base, horizon).
tags: Each key is a level and each value its S indices.
y_insample: Insample values of size (base, insample_size). Optional for forecast_proportions method.
idx_bottom: Indices corresponding to the bottom level of S, size (bottom).
level: float list 0-100, confidence levels for prediction intervals.
intervals_method: Sampler for prediction intervals, one of normality, bootstrap, permbu.
**sampler_kwargs: Coherent sampler instantiation arguments.

Returns:
self: object, fitted reconciler.*


source

OptimalCombination.predict

 OptimalCombination.predict (S:numpy.ndarray, y_hat:numpy.ndarray,
                             level:Optional[List[int]]=None)

*Predict using reconciler.

Predict using fitted mean and probabilistic reconcilers.

Parameters:
S: Summing matrix of size (base, bottom).
y_hat: Forecast values of size (base, horizon).
level: float list 0-100, confidence levels for prediction intervals.

Returns:
y_tilde: Reconciled predictions.*


source

OptimalCombination.fit_predict

 OptimalCombination.fit_predict (S:numpy.ndarray, y_hat:numpy.ndarray,
                                 idx_bottom:numpy.ndarray=None,
                                 y_insample:Optional[numpy.ndarray]=None,
                                 y_hat_insample:Optional[numpy.ndarray]=None,
                                 sigmah:Optional[numpy.ndarray]=None,
                                 level:Optional[List[int]]=None,
                                 intervals_method:Optional[str]=None,
                                 num_samples:Optional[int]=None,
                                 seed:Optional[int]=None,
                                 tags:Optional[Dict[str,numpy.ndarray]]=None)

*OptimalCombination Reconciliation Method.

Parameters:
S: Summing matrix of size (base, bottom).
y_hat: Forecast values of size (base, horizon).
y_insample: Insample values of size (base, insample_size). Only used by wls_var, mint_cov, mint_shrink
y_hat_insample: Insample fitted values of size (base, insample_size). Only used by wls_var, mint_cov, mint_shrink
idx_bottom: Indices corresponding to the bottom level of S, size (bottom).
level: float list 0-100, confidence levels for prediction intervals.
intervals_method: Sampler for prediction intervals, one of normality, bootstrap, permbu.

Returns:
y_tilde: Reconciled y_hat using the Optimal Combination approach.*


source

OptimalCombination.sample

 OptimalCombination.sample (num_samples:int)

*Sample probabilistic coherent distribution.

Generates n samples from a probabilistic coherent distribution. The method uses fitted mean and probabilistic reconcilers, defined by the intervals_method selected during the reconciler’s instantiation. Currently available: normality, bootstrap, permbu.

Parameters:
num_samples: int, number of samples generated from coherent distribution.

Returns:
samples: Coherent samples of size (num_series, horizon, num_samples).*

6. Emp. Risk Minimization


source

ERM

 ERM (method:str, lambda_reg:float=0.01)

*Empirical Risk Minimization Reconciliation Class.

The Empirical Risk Minimization reconciliation strategy relaxes the unbiasedness assumption of previous reconciliation methods like MinT and minimizes the squared errors between the reconciled predictions and the validation data to obtain an optimal reconciliation matrix P.

The exact solution for $\mathbf{P}$ (method='closed') follows the expression:

$$\mathbf{P}^{*} = \left(\mathbf{S}^{\intercal}\mathbf{S}\right)^{-1}\mathbf{Y}^{\intercal}\hat{\mathbf{Y}}\left(\hat{\mathbf{Y}}\hat{\mathbf{Y}}\right)^{-1}$$

The alternative Lasso-regularized solution for $\mathbf{P}$ (method='reg_bu') is useful when the validation observations are limited or the exact solution is numerically unstable:

$$\mathbf{P}^{*} = \text{argmin}_{\mathbf{P}} ||\mathbf{Y}-\mathbf{S} \mathbf{P} \hat{\mathbf{Y}} ||^{2}_{2} + \lambda ||\mathbf{P}-\mathbf{P}_{\text{BU}}||_{1}$$

Parameters:
method: str, one of closed, reg and reg_bu.
lambda_reg: float, l1 regularizer for reg and reg_bu.

References:
- Ben Taieb, S., & Koo, B. (2019). "Regularized regression for hierarchical forecasting without unbiasedness conditions". In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '19 (pp. 1337–1347). New York, NY, USA: Association for Computing Machinery.
*


source

ERM.fit

 ERM.fit (S, y_hat, y_insample, y_hat_insample,
          sigmah:Optional[numpy.ndarray]=None,
          intervals_method:Optional[str]=None,
          num_samples:Optional[int]=None, seed:Optional[int]=None,
          tags:Optional[Dict[str,numpy.ndarray]]=None,
          idx_bottom:Optional[numpy.ndarray]=None)

*ERM Fit Method.

Parameters:
S: Summing matrix of size (base, bottom).
y_hat: Forecast values of size (base, horizon).
y_insample: Train values of size (base, insample_size).
y_hat_insample: Insample train predictions of size (base, insample_size).
idx_bottom: Indices corresponding to the bottom level of S, size (bottom).
level: float list 0-100, confidence levels for prediction intervals.
intervals_method: Sampler for prediction intervals, one of normality, bootstrap, permbu.
**sampler_kwargs: Coherent sampler instantiation arguments.

Returns:
self: object, fitted reconciler.*


source

ERM.predict

 ERM.predict (S:numpy.ndarray, y_hat:numpy.ndarray,
              level:Optional[List[int]]=None)

*Predict using reconciler.

Predict using fitted mean and probabilistic reconcilers.

Parameters:
S: Summing matrix of size (base, bottom).
y_hat: Forecast values of size (base, horizon).
level: float list 0-100, confidence levels for prediction intervals.

Returns:
y_tilde: Reconciled predictions.*


source

ERM.fit_predict

 ERM.fit_predict (S:numpy.ndarray, y_hat:numpy.ndarray,
                  idx_bottom:numpy.ndarray=None,
                  y_insample:Optional[numpy.ndarray]=None,
                  y_hat_insample:Optional[numpy.ndarray]=None,
                  sigmah:Optional[numpy.ndarray]=None,
                  level:Optional[List[int]]=None,
                  intervals_method:Optional[str]=None,
                  num_samples:Optional[int]=None, seed:Optional[int]=None,
                  tags:Optional[Dict[str,numpy.ndarray]]=None)

*ERM Reconciliation Method.

Parameters:
S: Summing matrix of size (base, bottom).
y_hat: Forecast values of size (base, horizon).
y_insample: Train values of size (base, insample_size).
y_hat_insample: Insample train predictions of size (base, insample_size).
idx_bottom: Indices corresponding to the bottom level of S, size (bottom).
level: float list 0-100, confidence levels for prediction intervals.
intervals_method: Sampler for prediction intervals, one of normality, bootstrap, permbu.

Returns:
y_tilde: Reconciled y_hat using the ERM approach.*
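A minimal usage sketch of the signature documented above; the toy arrays stand in for validation actuals and their base (insample) predictions, and are illustrative only:

import numpy as np
from hierarchicalforecast.methods import ERM

S = np.array([[1., 1.],
              [1., 0.],
              [0., 1.]])
idx_bottom = np.array([1, 2])
y_hat = np.array([[105., 120.],
                  [ 60.,  70.],
                  [ 50.,  55.]])
rng = np.random.default_rng(0)
y_insample = rng.uniform(40., 60., size=(3, 24))                 # toy actuals
y_insample[0] = y_insample[1] + y_insample[2]                    # keep actuals coherent
y_hat_insample = y_insample + rng.normal(0., 1., size=(3, 24))   # toy base fits

res = ERM(method='closed', lambda_reg=1e-2).fit_predict(
    S=S, y_hat=y_hat, idx_bottom=idx_bottom,
    y_insample=y_insample, y_hat_insample=y_hat_insample)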


source

ERM.sample

 ERM.sample (num_samples:int)

*Sample probabilistic coherent distribution.

Generates n samples from a probabilistic coherent distribution. The method uses fitted mean and probabilistic reconcilers, defined by the intervals_method selected during the reconciler’s instantiation. Currently available: normality, bootstrap, permbu.

Parameters:
num_samples: int, number of samples generated from coherent distribution.

Returns:
samples: Coherent samples of size (num_series, horizon, num_samples).*
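The snippet below collects the keyword arguments shared by the reconcilers documented above; S, y_hat_base, y_base, y_hat_base_insample, sigmah, tags, and idx_bottom are assumed to have been built beforehand from a hierarchical dataset.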

reconciler_args = dict(S=S, 
                       y_hat=y_hat_base,
                       y_insample=y_base,
                       y_hat_insample=y_hat_base_insample,
                       sigmah=sigmah,
                       level=[80, 90],
                       intervals_method='normality',
                       num_samples=200,
                       seed=0,
                       tags=tags,
                       idx_bottom=idx_bottom
                       )
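Most of the fit_predict methods documented above accept exactly these keyword arguments (MiddleOut takes a reduced set), so the same dictionary can be unpacked directly into, for example, BottomUp or MinTrace (a sketch, assuming the arrays above are defined):

from hierarchicalforecast.methods import BottomUp, MinTrace

bu_res   = BottomUp().fit_predict(**reconciler_args)
mint_res = MinTrace(method='ols').fit_predict(**reconciler_args)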

References

General Reconciliation

Optimal Reconciliation

Hierarchical Probabilistic Coherent Predictions