Introduction

Predicting peaks in different markets is useful. In the electricity market, consuming electricity at peak demand is penalized with higher tariffs. When an individual or company consumes electricity at the time the grid is most in demand, regulators call this a coincident peak (CP).

In the Texas electricity market (ERCOT), the peak is the monthly 15-minute interval during which the ERCOT grid experiences its highest demand. The peak is caused by all consumers’ combined demand on the electrical grid. Coincident peak demand is an important factor that ERCOT uses to determine final electricity bills. ERCOT registers each client’s CP demand for four months, between June and September, and uses it to adjust electricity prices. Clients can therefore save on their electricity bills by reducing their coincident peak demand.

In this example we will train a LightGBM model on historic load data to forecast day-ahead peaks in September 2022. Multiple seasonalities are typically present in hourly electricity data: demand exhibits daily and weekly seasonality, with clear patterns for specific hours of the day (such as 6:00pm vs 3:00am) and for specific days (such as Sunday vs Friday).

First, we will load the ERCOT historic demand. Then, we will use the MLForecast.cross_validation method to fit the LightGBM model and forecast daily load during September. Finally, we show how to use the forecasts to detect the coincident peak.

Outline

  1. Install libraries
  2. Load and explore the data
  3. Fit LightGBM model and forecast
  4. Peak detection

Tip

You can use Colab to run this Notebook interactively


Libraries

We assume you have MLForecast already installed. Check this guide for instructions on how to install MLForecast.

If you have not installed it yet, install the necessary package with pip install mlforecast.

We also need LightGBM, which you can install with pip install lightgbm.

Load Data

The input to MLForecast is always a data frame in long format with three columns: unique_id, ds and y:

  • The unique_id (string, int or category) represents an identifier for the series.

  • The ds (datestamp or int) column should be either an integer indexing time or a datestamp ideally like YYYY-MM-DD for a date or YYYY-MM-DD HH:MM:SS for a timestamp.

  • The y (numeric) represents the measurement we wish to forecast.
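
For illustration only, a minimal frame in this format (with made-up values, not the ERCOT data) could be built like this:

import pandas as pd

# Purely illustrative example of the expected long format (values are made up)
example_df = pd.DataFrame({
    'unique_id': ['ERCOT', 'ERCOT', 'ERCOT'],          # series identifier
    'ds': pd.to_datetime(['2022-01-01 00:00:00',
                          '2022-01-01 01:00:00',
                          '2022-01-01 02:00:00']),     # hourly timestamps
    'y': [45000.0, 43500.0, 42800.0],                  # target values to forecast
})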

First, read the 2022 historic total demand of the ERCOT market. We processed the original data (available here) by adding the missing hour due to daylight saving time, parsing the date to datetime format, and filtering the columns of interest.

import numpy as np
import pandas as pd
from utilsforecast.plotting import plot_series
# Load data
Y_df = pd.read_csv('https://datasets-nixtla.s3.amazonaws.com/ERCOT-clean.csv', parse_dates=['ds'])
Y_df = Y_df.query("ds >= '2022-01-01' & ds <= '2022-10-01'")
fig = plot_series(Y_df)

We observe that the time series exhibits clear seasonal patterns. Moreover, the time series contains 6,552 observations, so computationally efficient methods are necessary to deploy them in production.
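
As a quick sanity check (a minimal sketch, assuming Y_df is the frame loaded above), you can confirm the size of the series and its hourly spacing:

print(Y_df.shape)                               # number of observations and columns
print(Y_df['ds'].min(), Y_df['ds'].max())       # period covered by the data
print(Y_df['ds'].diff().value_counts().head())  # spacing between timestamps (expected: 1 hour)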

Fit and Forecast LightGBM model

Import the MLForecast class and the models you need.

import lightgbm as lgb

from mlforecast import MLForecast
from mlforecast.target_transforms import Differences

First, instantiate the model and define the parameters.

Tip

In this example we are using the default parameters of the lgb.LGBMRegressor model, but you can change them to improve the forecasting performance.

models = [
    lgb.LGBMRegressor(verbosity=-1) # you can include more models here
]
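
If you do want to adjust them, you could replace the list above with a sketch like the following; the hyperparameter values shown are illustrative, not tuned for this dataset:

models = [
    lgb.LGBMRegressor(
        verbosity=-1,
        n_estimators=500,    # number of boosting rounds (illustrative value)
        learning_rate=0.05,  # shrinkage applied to each tree (illustrative value)
        num_leaves=64,       # maximum leaves per tree (illustrative value)
    )
]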

We fit the model by instantiating an MLForecast object with the following parameters:

  • models: a list of sklearn-like (fit and predict) models.

  • freq: a string indicating the frequency of the data. (See pandas’ available frequencies.)

  • target_transforms: Transformations to apply to the target before computing the features. These are restored at the forecasting step.

  • lags: Lags of the target to use as features.

# Instantiate MLForecast class as mlf
mlf = MLForecast(
    models=models,
    freq='H', 
    target_transforms=[Differences([24])],
    lags=range(1, 25)
)

Tip

In this example, we are only using differences and lags to produce features. See the full documentation to see all available features.
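
For instance, a richer configuration might add calendar features and lag transforms. The sketch below assumes a recent mlforecast version that exposes mlforecast.lag_transforms; the specific lags, window size, and date features are illustrative choices, not recommendations, and mlf_extra is a new name so the pipeline above stays unchanged:

from mlforecast import MLForecast
from mlforecast.lag_transforms import RollingMean
from mlforecast.target_transforms import Differences

mlf_extra = MLForecast(
    models=models,
    freq='H',
    target_transforms=[Differences([24])],
    lags=range(1, 25),
    lag_transforms={24: [RollingMean(window_size=24)]},  # rolling mean (window of 24) applied to lag 24 of the target
    date_features=['dayofweek', 'hour'],                 # calendar features derived from the ds column
)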

The cross_validation method allows the user to simulate multiple historic forecasts, greatly simplifying pipelines by replacing for loops with fit and predict methods. This method re-trains the model and produces forecasts for each window. See this tutorial for an animation of how the windows are defined.

Use the cross_validation method to produce all the daily forecasts for September. To produce daily forecasts, set the forecasting horizon h to 24. In this example we are simulating deploying the pipeline during September, so set the number of windows to 30 (one for each day). Finally, the step size between windows is 24 (equal to h, which is the default), ensuring that only one forecast is produced per day.

Additionally,

  • id_col: identifies each time series.
  • time_col: identifies the temporal column of the time series.
  • target_col: identifies the column to model.
crossvalidation_df = mlf.cross_validation(
    df=Y_df,
    h=24,          # forecast horizon: 24 hourly steps (one day)
    n_windows=30,  # one window per day of September; step_size defaults to h, so windows do not overlap
)
crossvalidation_df.head()
  unique_id                  ds              cutoff             y  LGBMRegressor
0     ERCOT 2022-09-01 00:00:00 2022-08-31 23:00:00  45482.471757   45685.265537
1     ERCOT 2022-09-01 01:00:00 2022-08-31 23:00:00  43602.658043   43779.819515
2     ERCOT 2022-09-01 02:00:00 2022-08-31 23:00:00  42284.817342   42672.470923
3     ERCOT 2022-09-01 03:00:00 2022-08-31 23:00:00  41663.156771   42091.768192
4     ERCOT 2022-09-01 04:00:00 2022-08-31 23:00:00  41710.621904   42481.403168

Important

When using cross_validation make sure the forecasts are produced at the desired timestamps. Check the cutoff column, which specifies the last timestamp before the forecasting window.
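
As a minimal check (assuming crossvalidation_df as produced above), you can inspect the cutoffs to confirm that there is one window per day:

print(crossvalidation_df['cutoff'].nunique())                                  # expected: 30, one per window
print(crossvalidation_df['cutoff'].min(), crossvalidation_df['cutoff'].max())  # first and last cutoff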

Peak Detection

Finally, we use the forecasts in crossvalidation_df to detect the daily hourly demand peaks. For each day, we set the detected peaks as the highest forecasts. In this case, we want to predict one peak (npeaks); depending on your setting and goals, this parameter might change. For example, the number of peaks can correspond to how many hours a battery can be discharged to reduce demand.

npeaks = 1 # Number of peaks

For the ERCOT 4CP detection task we are interested in correctly predicting the highest monthly load. Next, we filter the day in September with the highest hourly demand and predict the peak.

crossvalidation_df = crossvalidation_df.reset_index()[['ds','y','LGBMRegressor']] # Keep timestamps, actuals and forecasts
max_day = crossvalidation_df.iloc[crossvalidation_df['y'].argmax()].ds.day # Day with maximum load
cv_df_day = crossvalidation_df.query('ds.dt.day == @max_day') # Subset the day with the maximum load
max_hour = cv_df_day['y'].argmax() # Position of the true peak hour within that day
peaks = cv_df_day['LGBMRegressor'].argsort().iloc[-npeaks:].values # Predicted peaks
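
Before plotting, a small optional check (not part of the original pipeline) is to print the true and predicted peak timestamps directly:

print('True peak hour:      ', cv_df_day.iloc[max_hour]['ds'])       # actual peak of the day
print('Predicted peak hours:', list(cv_df_day.iloc[peaks]['ds']))    # top-npeaks forecasted hours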

In the following plot we see how the LightGBM model is able to correctly detect the coincident peak for September 2022.

import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10, 5))
ax.axvline(cv_df_day.iloc[max_hour]['ds'], color='black', label='True Peak')
ax.scatter(cv_df_day.iloc[peaks]['ds'], cv_df_day.iloc[peaks]['LGBMRegressor'], color='green', label=f'Predicted Top-{npeaks}')
ax.plot(cv_df_day['ds'], cv_df_day['y'], label='y', color='blue')
ax.plot(cv_df_day['ds'], cv_df_day['LGBMRegressor'], label='Forecast', color='red')
ax.set(xlabel='Time', ylabel='Load (MW)')
ax.grid()
ax.legend()
fig.savefig('../../figs/electricity_peak_forecasting__predicted_peak.png', bbox_inches='tight')
plt.close()

Important

In this example we only include September. However, MLForecast and LightGBM can correctly predict the peaks for all four CP months of 2022 (June through September). You can try this by increasing the n_windows parameter of cross_validation or by filtering the Y_df dataset.
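
For reference, a sketch of the extended run for June through September (assuming Y_df is loaded as above, so there is enough history before June for the lags and the 24-hour difference; crossvalidation_4cp_df is a new, hypothetical name) could look like this:

# June through September 2022: 30 + 31 + 31 + 30 = 122 daily windows
crossvalidation_4cp_df = mlf.cross_validation(
    df=Y_df,
    h=24,
    n_windows=122,
)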

Next steps

MLForecast, and LightGBM in particular, are good benchmarking models for peak detection. However, it might be useful to explore other, more advanced forecasting algorithms or to perform hyperparameter optimization.