source

Weather

 Weather (freq:str='10M', name:str='weather', n_ts:int=21,
          test_size:int=10539, val_size:int=5270, horizons:Tuple[int]=(96,
          192, 336, 720))

The Weather dataset contains one year (2020) of 21 meteorological measurements recorded every 10 minutes at the weather station of the Max Planck Institute for Biogeochemistry in Jena, Germany.

Reference: Wu, H., Xu, J., Wang, J., and Long, M. Autoformer: Decomposition Transformers with auto-correlation for long-term series forecasting. NeurIPS 2021. https://arxiv.org/abs/2106.13008.


source

TrafficL

 TrafficL (freq:str='H', name:str='traffic', n_ts:int=862,
           test_size:int=3508, val_size:int=1756, horizons:Tuple[int]=(96,
           192, 336, 720))

This large Traffic dataset, collected by the California Department of Transportation, reports hourly road occupancy rates from 862 sensors between January 2015 and December 2016.

Reference: Lai, G., Chang, W., Yang, Y., and Liu, H. Modeling Long and Short-Term Temporal Patterns with Deep Neural Networks. SIGIR 2018. http://arxiv.org/abs/1703.07015.

Wu, H., Xu, J., Wang, J., and Long, M. Autoformer: Decomposition Transformers with auto-correlation for long-term series forecasting. NeurIPS 2021. https://arxiv.org/abs/2106.13008.


source

ECL

 ECL (freq:str='15T', name:str='ECL', n_ts:int=321, n_time:int=26304,
      test_size:int=5260, val_size:int=2632, horizons:Tuple[int]=(96, 192,
      336, 720))

The Electricity dataset reports the fifteen-minute electricity consumption (kWh) of 321 customers from 2012 to 2014. For comparability, we aggregate it to hourly frequency.

Reference: Li, S. et al. Enhancing the locality and breaking the memory bottleneck of Transformer on time series forecasting. NeurIPS 2019. http://arxiv.org/abs/1907.00235.
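The hourly aggregation mentioned above amounts to summing the four fifteen-minute readings within each hour. A minimal sketch of that step, assuming the long `['unique_id', 'ds', 'y']` layout used by the loaders in this module (the example frame, identifier, and values are illustrative, not the library's code):

```python
import pandas as pd

# Toy 15-minute consumption series for one customer (illustrative values).
raw = pd.DataFrame({
    'unique_id': ['MT_001'] * 8,
    'ds': pd.date_range('2012-01-01', periods=8, freq='15T'),
    'y': [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0],  # kWh per 15 minutes
})

# Sum the four readings inside each hour to obtain hourly consumption.
hourly = (
    raw.set_index('ds')
       .groupby('unique_id')['y']
       .resample('H')
       .sum()
       .reset_index()
)
print(hourly)  # two hourly rows: 10.0 kWh and 26.0 kWh
```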


source

ETTm2

 ETTm2 (freq:str='15T', name:str='ETTm2', n_ts:int=7, n_time:int=57600,
        test_size:int=11520, val_size:int=11520, horizons:Tuple[int]=(96,
        192, 336, 720))

The ETTm2 dataset monitors an electricity transformer in a region of a province of China, recording its oil temperature and several load variants (such as high useful load and high useless load) from July 2016 to July 2018 at a fifteen-minute frequency.

Reference: Zhou, et al. Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. AAAI 2021. https://arxiv.org/abs/2012.07436
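As a quick sanity check on the attributes above (a worked arithmetic sketch, not library code): the 57,600 fifteen-minute steps correspond to the Informer-style 12/4/4-month split, so val_size and test_size are each four months of observations.

```python
# ETTm2 split arithmetic implied by the class attributes above.
n_time, val_size, test_size = 57_600, 11_520, 11_520
steps_per_month = 30 * 24 * 4              # 30 days x 24 hours x 4 steps per hour

train_size = n_time - val_size - test_size
assert train_size == 12 * steps_per_month  # 34,560 training observations (12 months)
assert val_size == 4 * steps_per_month     # 4 months of validation
assert test_size == 4 * steps_per_month    # 4 months of test
```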


source

ETTm1

 ETTm1 (freq:str='15T', name:str='ETTm1', n_ts:int=7, n_time:int=57600,
        test_size:int=11520, val_size:int=11520, horizons:Tuple[int]=(96,
        192, 336, 720))

The ETTm1 dataset monitors an electricity transformer in a region of a province of China, recording its oil temperature and several load variants (such as high useful load and high useless load) from July 2016 to July 2018 at a fifteen-minute frequency.


source

ETTh2

 ETTh2 (freq:str='H', name:str='ETTh2', n_ts:int=7, n_time:int=14400,
        test_size:int=2880, val_size:int=2880, horizons:Tuple[int]=(96,
        192, 336, 720))

The ETTh2 dataset monitors an electricity transformer in a region of a province of China, recording its oil temperature and several load variants (such as high useful load and high useless load) from July 2016 to July 2018 at an hourly frequency.


source

ETTh1

 ETTh1 (freq:str='H', name:str='ETTh1', n_ts:int=7, n_time:int=14400,
        test_size:int=2880, val_size:int=2880, horizons:Tuple[int]=(96,
        192, 336, 720))

The ETTh1 dataset monitors an electricity transformer in a region of a province of China, recording its oil temperature and several load variants (such as high useful load and high useless load) from July 2016 to July 2018 at an hourly frequency.


source

LongHorizon2

 LongHorizon2 (source_url:str='https://www.dropbox.com/s/rlc1qmprpvuqrsv/a
               ll_six_datasets.zip?dl=1')

This Long-Horizon datasets wrapper class provides utilities to download and wrangle the following datasets: ETT, ECL, Exchange, Traffic, ILI, and Weather (a usage sketch follows the list below).

  • Each set is normalized with the train data mean and standard deviation.
  • Datasets are partitioned into train, validation and test splits.
  • For all datasets, 70%, 10%, and 20% of the observations go to the train, validation, and test splits respectively, except for the ETT datasets, which use 20% for validation.
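A minimal usage sketch, assuming LongHorizon2 exposes a load(directory, group) classmethod like the other dataset wrappers in this library and returns a long-format DataFrame with ['unique_id', 'ds', 'y'] columns; the import path and return signature should be verified against the source link above.

```python
# Hypothetical usage sketch; verify the actual API against the source above.
from datasetsforecast.long_horizon2 import LongHorizon2, ETTm2

# Download (if needed) and load one of the groups listed above.
Y_df = LongHorizon2.load(directory='./data', group='ETTm2')

# Recover the temporal splits from the class attributes.
n_time = Y_df['ds'].nunique()
train_end = n_time - ETTm2.val_size - ETTm2.test_size

train_df = Y_df.groupby('unique_id').head(train_end)
print(f'train steps: {train_end}, '
      f'val steps: {ETTm2.val_size}, test steps: {ETTm2.test_size}')
```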