
Introduction

Some time series are generated from very high frequency data. These data generally exhibit multiple seasonalities. For example, hourly data may exhibit repeated patterns every day (every 24 observations) or every week (every 24 * 7 = 168 observations). This is the case for electricity load: consumption varies within the day, e.g., it may be expected to increase during the evenings, but it also varies across the week, e.g., perhaps there is an increase in electrical activity on weekends.
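To make this concrete, here is a minimal illustrative sketch (not part of the original analysis) that builds a synthetic hourly series with a daily and a weekly cycle using numpy; the PJM data analyzed below exhibits this kind of superimposed pattern.

import numpy as np
import pandas as pd

# four weeks of hourly timestamps (synthetic example, not the PJM data)
hours = pd.date_range('2001-01-01', periods=24 * 7 * 4, freq='H')
t = np.arange(len(hours))
daily = 10 * np.sin(2 * np.pi * t / 24)        # pattern that repeats every 24 observations
weekly = 5 * np.sin(2 * np.pi * t / (24 * 7))  # pattern that repeats every 168 observations
noise = np.random.normal(scale=1.0, size=len(t))
y_synthetic = 100 + daily + weekly + noise     # series with two superimposed seasonalities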

In this example we will show how to model the two seasonalities of the time series to generate accurate forecasts in a short time. We will use hourly PJM electricity load data. The original data can be found here.

Libraries

In this example we will use the following libraries:

  • StatsForecast. Lightning ⚡️ fast forecasting with statistical and econometric models. Includes the MSTL model for multiple seasonalities.
  • DatasetsForecast. Used to evaluate the performance of the forecasts.
  • Prophet. Benchmark model developed by Facebook.
  • NeuralProphet. Deep learning version of Prophet. Used as a benchmark.
!pip install statsforecast
!pip install datasetsforecast
!pip install prophet
!pip install "neuralprophet[live]"

Forecast using Multiple Seasonalities

Electricity Load Data

According to the dataset’s page,

PJM Interconnection LLC (PJM) is a regional transmission organization (RTO) in the United States. It is part of the Eastern Interconnection grid operating an electric transmission system serving all or parts of Delaware, Illinois, Indiana, Kentucky, Maryland, Michigan, New Jersey, North Carolina, Ohio, Pennsylvania, Tennessee, Virginia, West Virginia, and the District of Columbia. The hourly power consumption data comes from PJM’s website and are in megawatts (MW).

Let’s take a look at the data.

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

pd.plotting.register_matplotlib_converters()
plt.rc("figure", figsize=(10, 8))
plt.rc("font", size=10)
df = pd.read_csv('https://raw.githubusercontent.com/panambY/Hourly_Energy_Consumption/master/data/PJM_Load_hourly.csv')
df.columns = ['ds', 'y']
df.insert(0, 'unique_id', 'PJM_Load_hourly')
df['ds'] = pd.to_datetime(df['ds'])
df = df.sort_values(['unique_id', 'ds']).reset_index(drop=True)
df.tail()
        unique_id        ds                   y
32891   PJM_Load_hourly  2001-12-31 20:00:00  36392.0
32892   PJM_Load_hourly  2001-12-31 21:00:00  35082.0
32893   PJM_Load_hourly  2001-12-31 22:00:00  33890.0
32894   PJM_Load_hourly  2001-12-31 23:00:00  32590.0
32895   PJM_Load_hourly  2002-01-01 00:00:00  31569.0
df.plot(x='ds', y='y')

We can clearly see that the time series exhibits seasonal patterns. Moreover, the time series contains 32,896 observations, so it is necessary to use computationally efficient methods to deploy them in production.
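As a quick check (a small sketch that uses only the df loaded above), we can confirm the number of observations and that the sampling step is indeed one hour:

print(df.shape)                               # expected (32896, 3), matching the count above
print(df['ds'].diff().value_counts().head())  # the dominant step should be 1 hour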

MSTL model

The MSTL (Multiple Seasonal-Trend decomposition using LOESS) model, originally developed by Kasun Bandara, Rob J Hyndman and Christoph Bergmeir, decomposes the time series into its multiple seasonalities using local polynomial regression (LOESS). It then forecasts the trend with a custom non-seasonal model and each seasonal component with a SeasonalNaive model.

StatsForecast contains a fast implementation of the MSTL model. Also, the decomposition of the time series can be calculated.

from statsforecast import StatsForecast
from statsforecast.models import MSTL, AutoARIMA, SeasonalNaive
from statsforecast.utils import AirPassengers as ap

First we must define the model parameters. As mentioned before, the electricity load exhibits a seasonality every 24 hours (daily) and every 24 * 7 = 168 hours (weekly). Therefore, we will pass [24, 24 * 7] as the season lengths for the MSTL model. We must also specify how the trend will be forecast; in this case we will use the AutoARIMA model.

mstl = MSTL(
    season_length=[24, 24 * 7], # seasonalities of the time series 
    trend_forecaster=AutoARIMA() # model used to forecast trend
)

Once the model is defined, we instantiate the StatsForecast class to create the forecasts.

sf = StatsForecast(
    models=[mstl], # model used to fit each time series 
    freq='H', # frequency of the data
)

Fit the model

After that, we just use the fit method to fit each model to each time series.

sf = sf.fit(df=df)

Decompose the time series in multiple seasonalities

Once the model is fitted, we can access the decomposition using the fitted_ attribute of StatsForecast. This attribute stores all relevant information of the fitted models for each of the time series.

In this case we are fitting a single model for a single time series, so by accessing the fitted_ location [0, 0] we will find the relevant information of our model. The MSTL class generates a model_ attribute that contains the way the series was decomposed.

sf.fitted_[0, 0].model_
        data     trend         seasonal24    seasonal168  remainder
0       22259.0  26183.898892  -5215.124554  609.000432   681.225229
1       21244.0  26181.599305  -6255.673234  603.823918   714.250011
2       20651.0  26179.294886  -6905.329895  636.820423   740.214587
3       20421.0  26176.985472  -7073.420118  615.825999   701.608647
4       20713.0  26174.670877  -7062.395760  991.521912   609.202971
...
32891   36392.0  33123.552727  4387.149171   -488.177882  -630.524015
32892   35082.0  33148.242575  3479.852929   -682.928737  -863.166767
32893   33890.0  33172.926165  2307.808829   -650.566775  -940.168219
32894   32590.0  33197.603322  748.587723    -555.177849  -801.013195
32895   31569.0  33222.273902  -967.124123   -265.895357  -420.254422
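Since the decomposition is additive, the components shown above should sum back to the original series. The following quick sanity check (assuming the column names displayed in the table) verifies this:

decomposition = sf.fitted_[0, 0].model_
components_sum = decomposition[['trend', 'seasonal24', 'seasonal168', 'remainder']].sum(axis=1)
# maximum absolute reconstruction error; it should be numerically negligible
print((components_sum - decomposition['data']).abs().max())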

Let’s look graphically at the different components of the time series.

sf.fitted_[0, 0].model_.tail(24 * 28).plot(subplots=True, grid=True)
plt.tight_layout()
plt.show()

We observe a clear upward trend (orange line). This component will be forecast with the AutoARIMA model. We can also observe a very well-defined pattern every 24 hours and every 24 * 7 hours. These two seasonal components will each be forecast with a SeasonalNaive model.

Produce forecasts

To generate forecasts we only have to use the predict method, specifying the forecast horizon (h). In addition, to compute prediction intervals for the forecasts, we can include the parameter level, which receives a list of the levels of the prediction intervals we want to build. In this case we will only compute the 90% prediction interval (level=[90]).

forecasts = sf.predict(h=24, level=[90])
forecasts.head()
unique_id        ds                   MSTL          MSTL-lo-90    MSTL-hi-90
PJM_Load_hourly  2002-01-01 01:00:00  29956.744141  29585.187500  30328.298828
PJM_Load_hourly  2002-01-01 02:00:00  29057.691406  28407.498047  29707.884766
PJM_Load_hourly  2002-01-01 03:00:00  28654.699219  27767.101562  29542.298828
PJM_Load_hourly  2002-01-01 04:00:00  28499.009766  27407.640625  29590.378906
PJM_Load_hourly  2002-01-01 05:00:00  28821.716797  27552.236328  30091.197266

Let’s look at our forecasts graphically.

_, ax = plt.subplots(1, 1, figsize = (20, 7))
df_plot = pd.concat([df, forecasts]).set_index('ds').tail(24 * 7)
df_plot[['y', 'MSTL']].plot(ax=ax, linewidth=2)
ax.fill_between(df_plot.index, 
                df_plot['MSTL-lo-90'], 
                df_plot['MSTL-hi-90'],
                alpha=.35,
                color='orange',
                label='MSTL-level-90')
ax.set_title('PJM Load Hourly', fontsize=22)
ax.set_ylabel('Electricity Load', fontsize=20)
ax.set_xlabel('Timestamp [t]', fontsize=20)
ax.legend(prop={'size': 15})
ax.grid()

In the following sections we will plot forecasts from different models, so it is convenient to wrap the previous code in the following function.

def plot_forecasts(y_hist, y_true, y_pred, models):
    _, ax = plt.subplots(1, 1, figsize = (20, 7))
    y_true = y_true.merge(y_pred, how='left', on=['unique_id', 'ds'])
    df_plot = pd.concat([y_hist, y_true]).set_index('ds').tail(24 * 7)
    df_plot[['y'] + models].plot(ax=ax, linewidth=2)
    colors = ['orange', 'green', 'red']
    for model, color in zip(models, colors):
        ax.fill_between(df_plot.index, 
                        df_plot[f'{model}-lo-90'], 
                        df_plot[f'{model}-hi-90'],
                        alpha=.35,
                        color=color,
                        label=f'{model}-level-90')
    ax.set_title('PJM Load Hourly', fontsize=22)
    ax.set_ylabel('Electricity Load', fontsize=20)
    ax.set_xlabel('Timestamp [t]', fontsize=20)
    ax.legend(prop={'size': 15})
    ax.grid()

Performance of the MSTL model

Split Train/Test sets

To validate the accuracy of the MSTL model, we will show its performance on unseen data. We will use the classical technique of dividing the data into a training set and a test set. We will leave the last 24 observations (the last day) as the test set, so the model will be trained on 32,872 observations.

df_test = df.tail(24)
df_train = df.drop(df_test.index)
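A quick check that the split has the expected sizes:

print(len(df_train), len(df_test))  # expected: 32872 24
assert len(df_train) + len(df_test) == len(df)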

MSTL model

In addition to the MSTL model, we will include the SeasonalNaive model as a benchmark to validate the added value of the MSTL model. Including StatsForecast models is as simple as adding them to the list of models to be fitted.

sf = StatsForecast(
    models=[mstl, SeasonalNaive(season_length=24)], # add SeasonalNaive model to the list
    freq='H'
)

To measure the fitting time we will use the time module.

from time import time

To generate forecasts for the test set, we simply call fit and predict as before.

init = time()
sf = sf.fit(df=df_train)
forecasts_test = sf.predict(h=len(df_test), level=[90])
end = time()
forecasts_test.head()
unique_id        ds                   MSTL          MSTL-lo-90    MSTL-hi-90    SeasonalNaive  SeasonalNaive-lo-90  SeasonalNaive-hi-90
PJM_Load_hourly  2001-12-31 01:00:00  28345.212891  27973.572266  28716.853516  28326.0        23468.693359         33183.304688
PJM_Load_hourly  2001-12-31 02:00:00  27567.455078  26917.085938  28217.824219  27362.0        22504.693359         32219.306641
PJM_Load_hourly  2001-12-31 03:00:00  27260.001953  26372.138672  28147.865234  27108.0        22250.693359         31965.306641
PJM_Load_hourly  2001-12-31 04:00:00  27328.125000  26236.410156  28419.839844  26865.0        22007.693359         31722.306641
PJM_Load_hourly  2001-12-31 05:00:00  27640.673828  26370.773438  28910.572266  26808.0        21950.693359         31665.306641
time_mstl = (end - init) / 60
print(f'MSTL Time: {time_mstl:.2f} minutes')
MSTL Time: 0.22 minutes

We were thus able to generate forecasts for the next 24 hours in well under a minute. Now let’s compare the forecasts graphically with the actual values.

plot_forecasts(df_train, df_test, forecasts_test, models=['MSTL', 'SeasonalNaive'])

Let’s look at those produced only by MSTL.

plot_forecasts(df_train, df_test, forecasts_test, models=['MSTL'])

We note that MSTL produces very accurate forecasts that follow the behavior of the time series. Now let us evaluate the accuracy of the model numerically using the following metrics: MAE, MAPE, MASE, RMSE, and SMAPE.

from datasetsforecast.losses import (
    mae, mape, mase, rmse, smape
)

def evaluate_performance(y_hist, y_true, y_pred, models):
    # merge the forecasts with the actual test values
    y_true = y_true.merge(y_pred, how='left', on=['unique_id', 'ds'])
    evaluation = {}
    for model in models:
        evaluation[model] = {}
        for metric in [mase, mae, mape, rmse, smape]:
            metric_name = metric.__name__
            if metric_name == 'mase':
                # MASE additionally needs the training series and the seasonality
                evaluation[model][metric_name] = metric(
                    y_true['y'].values,
                    y_true[model].values,
                    y_hist['y'].values,
                    seasonality=24
                )
            else:
                evaluation[model][metric_name] = metric(y_true['y'].values, y_true[model].values)
    return pd.DataFrame(evaluation).T

evaluate_performance(df_train, df_test, forecasts_test, models=['MSTL', 'SeasonalNaive'])
               mase      mae          mape      rmse         smape
MSTL           0.341926  709.932048   2.182804  892.888012   2.162832
SeasonalNaive  0.894653  1857.541667  5.648190  2201.384101  5.868604

We observe that MSTL improves on the SeasonalNaive method by about 60% on the test set, measured in MASE.
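The improvement figure can be reproduced directly from the MASE values in the table above:

mase_mstl, mase_seasonal_naive = 0.341926, 0.894653  # MASE values from the table above
improvement = 1 - mase_mstl / mase_seasonal_naive
print(f'{improvement:.0%}')                          # about 62%, i.e. roughly 60%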

Comparison with Prophet

One of the most widely used models for time series forecasting is Prophet. This model is known for its ability to model different seasonalities (weekly, daily, and yearly). We will use it as a benchmark to see if MSTL adds value for this time series.

from prophet import Prophet

# create prophet model
prophet = Prophet(interval_width=0.9)
init = time()
prophet.fit(df_train)
# produce forecasts
future = prophet.make_future_dataframe(periods=len(df_test), freq='H', include_history=False)
forecast_prophet = prophet.predict(future)
end = time()
# data wrangling
forecast_prophet = forecast_prophet[['ds', 'yhat', 'yhat_lower', 'yhat_upper']]
forecast_prophet.columns = ['ds', 'Prophet', 'Prophet-lo-90', 'Prophet-hi-90']
forecast_prophet.insert(0, 'unique_id', 'PJM_Load_hourly')
forecast_prophet.head()
23:41:40 - cmdstanpy - INFO - Chain [1] start processing
23:41:56 - cmdstanpy - INFO - Chain [1] done processing
   unique_id        ds                   Prophet       Prophet-lo-90  Prophet-hi-90
0  PJM_Load_hourly  2001-12-31 01:00:00  25317.658386  20757.919539   30313.561582
1  PJM_Load_hourly  2001-12-31 02:00:00  24024.188077  19304.093939   28667.495805
2  PJM_Load_hourly  2001-12-31 03:00:00  23348.306824  18608.982825   28497.334752
3  PJM_Load_hourly  2001-12-31 04:00:00  23356.150113  18721.142270   28136.888630
4  PJM_Load_hourly  2001-12-31 05:00:00  24130.861217  19896.188455   28970.202276
time_prophet = (end - init) / 60
print(f'Prophet Time: {time_prophet:.2f} minutes')
Prophet Time: 0.30 minutes
times = pd.DataFrame({'model': ['MSTL', 'Prophet'], 'time (mins)': [time_mstl, time_prophet]})
times
   model    time (mins)
0  MSTL     0.217266
1  Prophet  0.301172

We observe that the time required for Prophet to run the fit and predict pipeline is greater than that of MSTL. Let’s look at the forecasts produced by Prophet.

forecasts_test = forecasts_test.merge(forecast_prophet, how='left', on=['unique_id', 'ds'])
plot_forecasts(df_train, df_test, forecasts_test, models=['MSTL', 'SeasonalNaive', 'Prophet'])

We note that Prophet is able to capture the overall behavior of the time series. However, in some cases it produces forecasts well below the actual values. It also fails to fit the valleys correctly.

evaluate_performance(df_train, df_test, forecasts_test, models=['MSTL', 'Prophet', 'SeasonalNaive'])
               mase      mae          mape      rmse         smape
MSTL           0.341926  709.932048   2.182804  892.888012   2.162832
Prophet        1.094768  2273.036373  7.343292  2709.400341  7.688665
SeasonalNaive  0.894653  1857.541667  5.648190  2201.384101  5.868604

In terms of accuracy, Prophet is not able to produce better forecasts than the SeasonalNaive model; the MSTL model, however, improves on Prophet’s forecasts by about 69% (MASE).

Comparison with NeuralProphet

NeuralProphet is a deep learning version of Prophet. It is also capable of handling different seasonalities, so we will use it as a benchmark as well.

from neuralprophet import NeuralProphet

neuralprophet = NeuralProphet(quantiles=[0.05, 0.95])
init = time()
neuralprophet.fit(df_train.drop(columns='unique_id'))
future = neuralprophet.make_future_dataframe(df=df_train.drop(columns='unique_id'), periods=len(df_test))
forecast_np = neuralprophet.predict(future)
end = time()
forecast_np = forecast_np[['ds', 'yhat1', 'yhat1 5.0%', 'yhat1 95.0%']]
forecast_np.columns = ['ds', 'NeuralProphet', 'NeuralProphet-lo-90', 'NeuralProphet-hi-90']
forecast_np.insert(0, 'unique_id', 'PJM_Load_hourly')
forecast_np.head()
WARNING - (NP.forecaster.fit) - When Global modeling with local normalization, metrics are displayed in normalized scale.
INFO - (NP.df_utils._infer_frequency) - Major frequency H corresponds to 99.973% of the data.
INFO - (NP.df_utils._infer_frequency) - Dataframe freq automatically defined as H
INFO - (NP.config.init_data_params) - Setting normalization to global as only one dataframe provided for training.
INFO - (NP.config.set_auto_batch_epoch) - Auto-set batch_size to 64
INFO - (NP.config.set_auto_batch_epoch) - Auto-set epochs to 76
INFO - (NP.df_utils._infer_frequency) - Major frequency H corresponds to 99.973% of the data.
INFO - (NP.df_utils._infer_frequency) - Defined frequency is equal to major frequency - H
INFO - (NP.df_utils.return_df_in_original_format) - Returning df with no ID column
INFO - (NP.df_utils._infer_frequency) - Major frequency H corresponds to 95.833% of the data.
INFO - (NP.df_utils._infer_frequency) - Defined frequency is equal to major frequency - H
INFO - (NP.df_utils._infer_frequency) - Major frequency H corresponds to 95.833% of the data.
INFO - (NP.df_utils._infer_frequency) - Defined frequency is equal to major frequency - H
INFO - (NP.df_utils.return_df_in_original_format) - Returning df with no ID column
   unique_id        ds                   NeuralProphet  NeuralProphet-lo-90  NeuralProphet-hi-90
0  PJM_Load_hourly  2001-12-31 01:00:00  25019.892578   22296.675781         27408.724609
1  PJM_Load_hourly  2001-12-31 02:00:00  24128.816406   21439.851562         26551.615234
2  PJM_Load_hourly  2001-12-31 03:00:00  23736.679688   20961.978516         26289.349609
3  PJM_Load_hourly  2001-12-31 04:00:00  23476.744141   20731.619141         26050.443359
4  PJM_Load_hourly  2001-12-31 05:00:00  23899.162109   21217.503906         26449.603516
time_np = (end - init) / 60
print(f'NeuralProphet Time: {time_np:.2f} minutes')
NeuralProphet Time: 2.95 minutes
times = pd.concat([times, pd.DataFrame({'model': ['NeuralProphet'], 'time (mins)': [time_np]})], ignore_index=True)
times
   model          time (mins)
0  MSTL           0.217266
1  Prophet        0.301172
2  NeuralProphet  2.946358

We observe that NeuralProphet requires a longer processing time than Prophet and MSTL.
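Using the times DataFrame above, we can also express each model’s runtime relative to MSTL (a small illustrative snippet):

# ratio of each model's runtime to MSTL's runtime (row 0 of the times DataFrame)
times.assign(relative_to_MSTL=times['time (mins)'] / times.loc[0, 'time (mins)'])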

forecasts_test = forecasts_test.merge(forecast_np, how='left', on=['unique_id', 'ds'])
plot_forecasts(df_train, df_test, forecasts_test, models=['MSTL', 'NeuralProphet', 'Prophet'])

The forecast plot shows that NeuralProphet generates results very similar to Prophet’s, as expected.

evaluate_performance(df_train, df_test, forecasts_test, models=['MSTL', 'NeuralProphet', 'Prophet', 'SeasonalNaive'])
               mase      mae          mape      rmse         smape
MSTL           0.341926  709.932048   2.182804  892.888012   2.162832
NeuralProphet  1.084915  2252.578613  7.280202  2671.145730  7.615492
Prophet        1.094768  2273.036373  7.343292  2709.400341  7.688665
SeasonalNaive  0.894653  1857.541667  5.648190  2201.384101  5.868604

With respect to the numerical evaluation, NeuralProphet improves on the results of Prophet, as expected; however, MSTL improves on NeuralProphet’s forecasts by about 68% (MASE).

Important

The performance of NeuralProphet can be improved using hyperparameter optimization, which can increase the fitting time significantly. In this example we showed its performance with default settings.
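As a purely illustrative sketch (the argument values below are untuned assumptions, not recommended settings), a hyperparameter exploration could start by overriding some of NeuralProphet’s defaults:

# hypothetical, untuned settings shown only to indicate where tuning would happen
neuralprophet_tuned = NeuralProphet(
    quantiles=[0.05, 0.95],  # same 90% interval as before
    epochs=100,              # override the auto-set number of epochs
    learning_rate=0.01,      # fixed learning rate instead of the automatic lr finder
    batch_size=64,           # override the auto-set batch size
)
# neuralprophet_tuned.fit(df_train.drop(columns='unique_id'))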

Conclusion

In this post we introduced MSTL, a model originally developed by Kasun Bandara, Rob J Hyndman and Christoph Bergmeir that is capable of handling time series with multiple seasonalities. We also showed that, for the PJM electricity load time series, it offers better performance in both computation time and accuracy than the Prophet and NeuralProphet models.

References

  • Kasun Bandara, Rob J Hyndman, Christoph Bergmeir (2021). MSTL: A Seasonal-Trend Decomposition Algorithm for Time Series with Multiple Seasonal Patterns. arXiv:2107.13462.