M5
M5.download
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| directory | str | Directory path to download the dataset. | required |
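A minimal usage sketch, assuming the class is imported from `datasetsforecast.m5`; `./data` is a hypothetical target directory:

```python
from datasetsforecast.m5 import M5

# Download the raw M5 files into ./data (hypothetical path).
M5.download(directory='./data')
```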
M5.load
Returns:
| Type | Description |
|---|---|
| Tuple[DataFrame, DataFrame, DataFrame] | Target time series with columns ['unique_id', 'ds', 'y'], exogenous time series with columns ['unique_id', 'ds'] and exogenous variables, and static exogenous variables with columns ['unique_id'] and static variables. |
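A short sketch of unpacking the three returned DataFrames; `./data` is again a hypothetical directory:

```python
from datasetsforecast.m5 import M5

# Y_df: target series, X_df: temporal exogenous variables,
# S_df: static exogenous variables.
Y_df, X_df, S_df = M5.load(directory='./data')
```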
M5.source_url
Evaluation class
M5Evaluation
M5Evaluation.aggregate_levels
Returns:
| Type | Description |
|---|---|
| DataFrame | Aggregated forecasts as a wide pandas DataFrame with columns ['unique_id'] and forecasts. |
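A hedged sketch under an assumed signature (only the return value is documented above); `y_hat_df` is a hypothetical wide forecasts DataFrame:

```python
from datasetsforecast.m5 import M5Evaluation

# Assumption: aggregate_levels accepts bottom-level forecasts as a wide
# DataFrame with an 'unique_id' column and returns them aggregated to
# the M5 hierarchy levels.
y_hat_levels = M5Evaluation.aggregate_levels(y_hat_df)
```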
M5Evaluation.evaluate
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| directory | str | Directory where data will be downloaded. | required |
| validation | bool | Whether to perform validation evaluation. Default False returns the test evaluation. | False |
| y_hat | Union[DataFrame, str] | Forecasts as a wide pandas DataFrame with columns ['unique_id'] and forecasts, or a benchmark url from https://github.com/Nixtla/m5-forecasts/tree/main/forecasts. | required |
Returns:
| Type | Description |
|---|---|
| DataFrame | DataFrame with columns OWA, SMAPE, and MASE, and group as index. |
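A minimal call sketch; `y_hat_df` is a hypothetical wide forecasts DataFrame with an 'unique_id' column followed by the forecast steps:

```python
from datasetsforecast.m5 import M5Evaluation

# Test evaluation (validation=False is the default); returns OWA,
# SMAPE and MASE per aggregation level.
evaluation = M5Evaluation.evaluate(directory='./data', y_hat=y_hat_df)
```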
M5Evaluation.levels
M5Evaluation.load_benchmark
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| directory | str | Directory where data will be downloaded. | required |
| source_url | str | Optional benchmark url obtained from https://github.com/Nixtla/m5-forecasts/tree/master/forecasts. If None, returns the M5 winner. | None |
| validation | bool | Whether to return validation forecasts. Default False returns the test forecasts. | False |
Returns:
| Type | Description |
|---|---|
| ndarray | Numpy array of shape (n_series, horizon). |
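A short sketch loading the default benchmark; with `source_url=None` the method returns the M5 winner's forecasts:

```python
from datasetsforecast.m5 import M5Evaluation

# Numpy array of shape (n_series, horizon); './data' is a hypothetical path.
y_hat_winner = M5Evaluation.load_benchmark(directory='./data')
print(y_hat_winner.shape)
```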
URL-based evaluation
The method `evaluate` of the `M5Evaluation` class can receive the URL of a submission to the M5 competition. The reference results used to check the on-the-fly evaluation were obtained from the official evaluation.
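A sketch of the URL-based call; the file name below is left as a placeholder, pick any submission listed in the Nixtla/m5-forecasts forecasts folder:

```python
from datasetsforecast.m5 import M5Evaluation

# Placeholder URL: substitute a real file from
# https://github.com/Nixtla/m5-forecasts/tree/main/forecasts
url = 'https://github.com/Nixtla/m5-forecasts/tree/main/forecasts/...'
evaluation = M5Evaluation.evaluate(directory='./data', y_hat=url)
```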
Pandas-based evaluation
The method `evaluate` can also receive a pandas DataFrame of forecasts.
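A sketch of the DataFrame-based call; `my_forecasts.csv` is a hypothetical file holding forecasts in the wide layout described above:

```python
import pandas as pd
from datasetsforecast.m5 import M5Evaluation

# Wide DataFrame with an 'unique_id' column and one column per
# forecast step (hypothetical file name).
y_hat_df = pd.read_csv('my_forecasts.csv')
evaluation = M5Evaluation.evaluate(directory='./data', y_hat=y_hat_df)
```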
Validation evaluation
You can also evaluate the official validation set (Kaggle-Competition-M5).
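The same call with `validation=True`; `y_hat_df` remains a hypothetical forecasts DataFrame as in the previous sketches:

```python
from datasetsforecast.m5 import M5Evaluation

# Evaluate against the validation split instead of the test split.
evaluation = M5Evaluation.evaluate(
    directory='./data',
    y_hat=y_hat_df,  # hypothetical wide forecasts DataFrame
    validation=True,
)
```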
Favorita Kaggle competition
The evaluation metric of the Favorita Kaggle competition was the normalized weighted root mean squared logarithmic error (NWRMSLE). Perishable items have a score weight of 1.25; otherwise, the weight is 1.0.
| Kaggle Competition Forecasting Methods | 16D ahead NWRMSLE |
|---|---|
| LGBM [1] | 0.5091 |
| Seq2Seq WaveNet [2] | 0.5129 |
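For clarity, a small sketch of the metric as described above (a minimal implementation of the stated formula, not the official Kaggle scorer):

```python
import numpy as np

def nwrmsle(y_true, y_pred, perishable):
    """Normalized weighted RMSLE: perishable items weigh 1.25, others 1.0."""
    w = np.where(perishable, 1.25, 1.0)
    diff = np.log1p(np.maximum(y_pred, 0)) - np.log1p(np.maximum(y_true, 0))
    return np.sqrt(np.sum(w * diff ** 2) / np.sum(w))

# Tiny example with three observations, the first one perishable.
score = nwrmsle(
    y_true=np.array([3.0, 10.0, 0.0]),
    y_pred=np.array([2.5, 12.0, 1.0]),
    perishable=np.array([True, False, False]),
)
```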
References
1. Corporación Favorita. Corporación Favorita Grocery Sales Forecasting. Kaggle Competition Leaderboard, 2018.
2. Glib Kechyn, Lucius Yu, Yangguang Zang, and Svyatoslav Kechyn. Sales Forecasting Using WaveNet within the Framework of the Favorita Kaggle Competition. Computing Research Repository, abs/1803.04037, 2018.

