API Reference
Hierarchical Evaluation
To assist the evaluation of hierarchical forecasting systems, we make available an `evaluate` function that can be used in combination with loss functions from `utilsforecast.losses`.
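For instance, the function can be imported together with the losses it accepts. This is a minimal sketch; the module paths assume recent hierarchicalforecast and utilsforecast releases:

```python
# Assumed import locations (recent hierarchicalforecast / utilsforecast releases).
from hierarchicalforecast.evaluation import evaluate
from utilsforecast.losses import rmse, mase  # loss functions accepted by `evaluate`
```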
`evaluate`
Evaluate hierarchical forecasts using different metrics.
| | Type | Default | Details |
|---|---|---|---|
| df | FrameT | | Forecasts to evaluate. Must have `id_col`, `time_col`, `target_col` and the models' predictions. |
| metrics | list | | Functions with arguments `df`, `models`, `id_col`, `target_col` and optionally `train_df`. |
| tags | dict | | Each key is a level in the hierarchy and its value contains the tags associated with that level. |
| models | Optional | None | Names of the models to evaluate. If `None`, every column in the dataframe is used after removing id, time and target. |
| train_df | Optional | None | Training set. Used to evaluate metrics such as `mase`. |
| level | Optional | None | Prediction interval levels. Used to compute losses that rely on quantiles. |
| id_col | str | unique_id | Column that identifies each series. |
| time_col | str | ds | Column that identifies each timestep; its values can be timestamps or integers. |
| target_col | str | y | Column that contains the target. |
| agg_fn | Optional | mean | Statistic to compute on the scores by id to reduce them to a single number. |
| benchmark | Optional | None | If passed, evaluators are scaled by the error of this benchmark model. |
| Returns | FrameT | | Metrics with one row per (id, metric) combination and one column per model. If `agg_fn` is not `None`, there is only one row per metric. |
Example
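The following is a minimal sketch of how `evaluate` might be called on a toy two-level hierarchy. The dataframe contents, the `Naive` model column and the tag names are made up for illustration, and the import paths assume recent hierarchicalforecast and utilsforecast releases:

```python
from functools import partial

import numpy as np
import pandas as pd
from hierarchicalforecast.evaluation import evaluate
from utilsforecast.losses import mase, rmse

# Toy two-level hierarchy: one total series and two bottom-level series (made up).
ids = ["total", "total/a", "total/b"]
horizon, insample = 4, 12
rng = np.random.default_rng(0)

# Hypothetical reconciled forecasts for the test window, with the actuals in `y`
# and one model column ("Naive") holding that model's predictions.
forecasts_df = pd.DataFrame({
    "unique_id": np.repeat(ids, horizon),
    "ds": np.tile(np.arange(insample + 1, insample + horizon + 1), len(ids)),
    "y": rng.random(len(ids) * horizon),
    "Naive": rng.random(len(ids) * horizon),
})

# In-sample data, required by scale-dependent metrics such as `mase`.
train_df = pd.DataFrame({
    "unique_id": np.repeat(ids, insample),
    "ds": np.tile(np.arange(1, insample + 1), len(ids)),
    "y": rng.random(len(ids) * insample),
})

# Tags map each level of the hierarchy to the series it contains.
tags = {
    "Country": np.array(["total"]),
    "Country/Region": np.array(["total/a", "total/b"]),
}

evaluation = evaluate(
    df=forecasts_df,
    metrics=[rmse, partial(mase, seasonality=1)],  # extra metric args bound with `partial`
    tags=tags,
    train_df=train_df,  # only needed because `mase` scales by in-sample errors
)
print(evaluation)
```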