M4 meta information
Other
Other (seasonality:int=1, horizon:int=8, freq:str='D', name:str='Other', n_ts:int=5000, included_groups:Tuple=('Weekly', 'Daily', 'Hourly'))
Hourly
Hourly (seasonality:int=24, horizon:int=48, freq:str='H', name:str='Hourly', n_ts:int=414)
Daily
Daily (seasonality:int=1, horizon:int=14, freq:str='D', name:str='Daily', n_ts:int=4227)
Weekly
Weekly (seasonality:int=1, horizon:int=13, freq:str='W', name:str='Weekly', n_ts:int=359)
Monthly
Monthly (seasonality:int=12, horizon:int=18, freq:str='M', name:str='Monthly', n_ts:int=48000)
Quarterly
Quarterly (seasonality:int=4, horizon:int=8, freq:str='Q', name:str='Quarterly', n_ts:int=24000)
Yearly
Yearly (seasonality:int=1, horizon:int=6, freq:str='Y', name:str='Yearly', n_ts:int=23000)
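Each of the classes above stores the metadata for one M4 group: the seasonal period, the forecast horizon, the pandas frequency alias, and the number of series in the group. A minimal sketch of reading that metadata, assuming the classes are dataclasses whose defaults are exposed as class attributes (as the signatures above suggest):

# Sketch: read the M4 group metadata directly from the classes above
# (assumes they are dataclasses, so the defaults are plain class attributes).
for group in (Hourly, Daily, Weekly, Monthly, Quarterly, Yearly, Other):
    print(f'{group.name}: seasonality={group.seasonality}, horizon={group.horizon}, '
          f'freq={group.freq}, n_ts={group.n_ts}')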
Download data class
M4 (source_url:str='https://raw.githubusercontent.com/Mcompetitions/M4-methods/master/Dataset/', naive2_forecast_url:str='https://github.com/Nixtla/m4-forecasts/raw/master/forecasts/submission-Naive2.zip')
import numpy as np

# Download the Hourly group into ./data and load it.
group = 'Hourly'
await M4.async_download('data', group=group)
df, *_ = M4.load(directory='data', group=group)

# Count the distinct series in the group.
n_series = len(np.unique(df.unique_id.values))
print(f'Group: {group} n_series: {n_series}')
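The loaded frame can also be inspected directly. A short sketch, assuming M4.load returns the series in long format with unique_id, ds, and y columns:

# Sketch: sanity-check the loaded data
# (assumes a long-format frame with `unique_id`, `ds`, `y` columns).
print(df.head())
# Distribution of series lengths in the selected group.
print(df.groupby('unique_id').size().describe())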
Evaluation class
M4Evaluation
URL-based evaluation
The evaluate method of the M4Evaluation class can receive a URL pointing to a benchmark submission uploaded to the M4 competition. The reference values used below to check the on-the-fly evaluation were taken from the official M4 evaluation.
from fastcore.test import test_close

# Evaluate the ESRNN benchmark submission directly from its URL.
esrnn_url = 'https://github.com/Nixtla/m4-forecasts/raw/master/forecasts/submission-118.zip'
esrnn_evaluation = M4Evaluation.evaluate('data', 'Hourly', esrnn_url)

# Check the on-the-fly metrics against the official M4 results.
test_close(esrnn_evaluation['SMAPE'].item(), 9.328, eps=1e-3)
test_close(esrnn_evaluation['MASE'].item(), 0.893, eps=1e-3)
test_close(esrnn_evaluation['OWA'].item(), 0.440, eps=1e-3)
esrnn_evaluation
Numpy-based evaluation
The evaluate method can also receive a numpy array of forecasts.
# Download the FFORMA benchmark forecasts and load them as a numpy array.
fforma_url = 'https://github.com/Nixtla/m4-forecasts/raw/master/forecasts/submission-245.zip'
fforma_forecasts = M4Evaluation.load_benchmark('data', 'Hourly', fforma_url)

# Evaluate the array of forecasts and check against the official M4 results.
fforma_evaluation = M4Evaluation.evaluate('data', 'Hourly', fforma_forecasts)
test_close(fforma_evaluation['SMAPE'].item(), 11.506, eps=1e-3)
test_close(fforma_evaluation['MASE'].item(), 0.819, eps=1e-3)
test_close(fforma_evaluation['OWA'].item(), 0.484, eps=1e-3)
fforma_evaluation
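Because both calls return the same metrics table, the benchmarks can be compared side by side. A small sketch, assuming each evaluation result is a pandas DataFrame with the SMAPE, MASE, and OWA columns used above:

import pandas as pd

# Sketch: stack both evaluations into one table for comparison
# (assumes both results are pandas DataFrames with the same metric columns).
comparison = pd.concat([esrnn_evaluation, fforma_evaluation], keys=['ESRNN', 'FFORMA'])
print(comparison)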