Prerequisites

This tutorial assumes basic familiarity with StatsForecast. For a minimal example, visit the Quick Start.

Introduction

The Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model is used for time series that exhibit non-constant volatility over time. Here volatility refers to the conditional standard deviation. The GARCH(p,q) model is given by

$$y_t = v_t \sigma_t$$

where $v_t$ is independent and identically distributed with zero mean and unit variance, and $\sigma_t$ evolves according to

$$\sigma_t^2 = w + \sum_{i=1}^{p} \alpha_i y_{t-i}^2 + \sum_{j=1}^{q} \beta_j \sigma_{t-j}^2$$

The coefficients in the equation above must satisfy the following conditions:

  1. $w > 0$, $\alpha_i \geq 0$ for all $i$, and $\beta_j \geq 0$ for all $j$
  2. $\sum_{k=1}^{\max(p,q)} (\alpha_k + \beta_k) < 1$. Here it is assumed that $\alpha_i = 0$ for $i > p$ and $\beta_j = 0$ for $j > q$.

A particular case of the GARCH model is the ARCH model, in which $q = 0$. Both models are commonly used in finance to model the volatility of stock prices, exchange rates, interest rates, and other financial instruments. They’re also used in risk management to estimate the probability of large variations in the price of financial assets.
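To make the recursion concrete, here is a minimal simulation sketch (not part of the original tutorial) of a GARCH(1,1) process using numpy. The coefficients w = 0.1, alpha = 0.2, and beta = 0.7 are hypothetical values chosen only to satisfy the conditions above.

import numpy as np

# Hypothetical GARCH(1,1) coefficients: w > 0, alpha >= 0, beta >= 0, alpha + beta < 1
w, alpha, beta = 0.1, 0.2, 0.7
n = 500

rng = np.random.default_rng(0)
v = rng.standard_normal(n)          # v_t: iid, zero mean, unit variance
y = np.zeros(n)                     # simulated series
sigma2 = np.zeros(n)                # conditional variances
sigma2[0] = w / (1 - alpha - beta)  # start at the unconditional variance

for t in range(1, n):
    sigma2[t] = w + alpha * y[t - 1] ** 2 + beta * sigma2[t - 1]
    y[t] = np.sqrt(sigma2[t]) * v[t]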

By the end of this tutorial, you’ll have a good understanding of how to implement a GARCH or an ARCH model in StatsForecast and how they can be used to analyze and predict financial time series data.

Outline:

  1. Install libraries
  2. Load and explore the data
  3. Train models
  4. Perform time series cross-validation
  5. Evaluate results
  6. Forecast volatility

Tip

You can use Colab to run this notebook interactively.

Install libraries

We assume that you have StatsForecast already installed. If not, check this guide for instructions on how to install StatsForecast.

Install the necessary packages using pip:

pip install statsforecast -U

Load and explore the data

In this tutorial, we’ll use the last 5 years of prices from the S&P 500 and several publicly traded companies. The data can be downloaded from Yahoo! Finance using yfinance. To install it, use pip install yfinance.

# pip install yfinance

We’ll also need pandas to deal with the dataframes.

import yfinance as yf
import pandas as pd
tickers = ['SPY', 'MSFT', 'AAPL', 'GOOG', 'AMZN', 'TSLA', 'NVDA', 'META', 'NKE', 'NFLX'] 
df = yf.download(tickers, start = '2018-01-01', end = '2022-12-31', interval='1mo', progress=False) # use monthly prices
df.head()
(Output: the first five rows of a dataframe with MultiIndex columns, containing the Adj Close and Volume fields for each of the ten tickers, indexed by Date.)

The data downloaded includes different prices. We’ll use the adjusted closing price, which is the closing price after accounting for any corporate actions like stock splits or dividend distributions. It is also the price that is used to examine historical returns.

Notice that the dataframe that yfinance returns has a MultiIndex, so we need to select both the adjusted price and the tickers.

df = df.loc[:, (['Adj Close'], tickers)]
df.columns = df.columns.droplevel() # drop MultiIndex
df = df.reset_index()
df.head()
|   | Date | SPY | MSFT | AAPL | GOOG | AMZN | TSLA | NVDA | META | NKE | NFLX |
|---|------|-----|------|------|------|------|------|------|------|-----|------|
| 0 | 2018-01-01 | 252.565216 | 88.027702 | 39.388084 | 58.353695 | 72.544502 | 23.620667 | 6.078998 | 186.328979 | 63.341862 | 270.299988 |
| 1 | 2018-02-01 | 243.381882 | 86.878807 | 41.902908 | 55.101181 | 75.622498 | 22.870667 | 5.985018 | 177.784729 | 62.236938 | 291.380005 |
| 2 | 2018-03-01 | 235.766373 | 84.959763 | 39.631344 | 51.463116 | 72.366997 | 17.742001 | 5.731123 | 159.310333 | 61.689133 | 295.350006 |
| 3 | 2018-04-01 | 237.934006 | 87.054207 | 39.036106 | 50.741886 | 78.306503 | 19.593332 | 5.565567 | 171.483688 | 63.691761 | 312.459991 |
| 4 | 2018-05-01 | 243.717957 | 92.006393 | 44.140598 | 54.116600 | 81.481003 | 18.982000 | 6.240908 | 191.204315 | 66.867508 | 351.600006 |

The input to StatsForecast is a dataframe in long format with three columns: unique_id, ds and y:

  • unique_id: (string, int or category) A unique identifier for the series.
  • ds: (datestamp or int) A datestamp in format YYYY-MM-DD or YYYY-MM-DD HH:MM:SS or an integer indexing time.
  • y: (numeric) The measurement we wish to forecast.

Hence, we need to reshape the data. We’ll do this by creating a new dataframe called prices.

prices = df.melt(id_vars = 'Date')
prices = prices.rename(columns={'Date': 'ds', 'Ticker': 'unique_id', 'value': 'y'})
prices = prices[['unique_id', 'ds', 'y']]
prices
|     | unique_id | ds | y |
|-----|-----------|----|---|
| 0   | SPY  | 2018-01-01 | 252.565216 |
| 1   | SPY  | 2018-02-01 | 243.381882 |
| 2   | SPY  | 2018-03-01 | 235.766373 |
| 3   | SPY  | 2018-04-01 | 237.934006 |
| 4   | SPY  | 2018-05-01 | 243.717957 |
| ... | ...  | ...        | ...        |
| 595 | NFLX | 2022-08-01 | 223.559998 |
| 596 | NFLX | 2022-09-01 | 235.440002 |
| 597 | NFLX | 2022-10-01 | 291.880005 |
| 598 | NFLX | 2022-11-01 | 305.529999 |
| 599 | NFLX | 2022-12-01 | 294.880005 |

We can plot these series using the plot method of the StatsForecast class.

from statsforecast import StatsForecast
StatsForecast.plot(prices)

With the prices, we can compute the logarithmic returns of the S&P 500 and the publicly traded companies. This is the variable we’re interested in since it’s likely to work well with the GARCH framework. The logarithmic return is given by

$$return_t = \log \left( \frac{price_t}{price_{t-1}} \right)$$

We’ll compute the returns on the prices dataframe and then create a returns dataframe with StatsForecast’s format. To do this, we’ll need numpy.

import numpy as np 
prices['rt'] = prices['y'].div(prices.groupby('unique_id')['y'].shift(1))
prices['rt'] = np.log(prices['rt'])

returns = prices[['unique_id', 'ds', 'rt']]
returns = returns.rename(columns={'rt':'y'})
returns
|     | unique_id | ds | y |
|-----|-----------|----|---|
| 0   | SPY  | 2018-01-01 | NaN |
| 1   | SPY  | 2018-02-01 | -0.037038 |
| 2   | SPY  | 2018-03-01 | -0.031790 |
| 3   | SPY  | 2018-04-01 | 0.009152 |
| 4   | SPY  | 2018-05-01 | 0.024018 |
| ... | ...  | ...        | ...       |
| 595 | NFLX | 2022-08-01 | -0.005976 |
| 596 | NFLX | 2022-09-01 | 0.051776 |
| 597 | NFLX | 2022-10-01 | 0.214887 |
| 598 | NFLX | 2022-11-01 | 0.045705 |
| 599 | NFLX | 2022-12-01 | -0.035479 |

Warning

If the scale of the data is very small (say, less than $1e-5$), scipy.optimize.minimize might not terminate successfully. In this case, rescale the data and then generate the GARCH or ARCH model.
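For example, here is a minimal sketch of this workaround (the factor of 100 is an arbitrary choice that turns returns into percentages; it is not part of the original tutorial). Since GARCH forecasts scale linearly with the data, dividing the forecasts by the same factor returns them to the original units.

from statsforecast import StatsForecast
from statsforecast.models import GARCH

scale = 100  # arbitrary rescaling factor
scaled_returns = returns.assign(y=returns['y'] * scale)

# Fit and forecast on the rescaled data...
sf_scaled = StatsForecast(models=[GARCH(1, 1)], freq='MS', n_jobs=-1)
scaled_fcst = sf_scaled.forecast(df=scaled_returns, h=3)

# ...then undo the scaling to recover forecasts in the original units.
scaled_fcst['GARCH(1,1)'] = scaled_fcst['GARCH(1,1)'] / scale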

StatsForecast.plot(returns)

From this plot, we can see that the returns seem suited for the GARCH framework, since large shocks tend to be followed by other large shocks. This doesn’t mean that after every large shock we should expect another one; merely that the probability of a large variance is greater than the probability of a small one.
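One quick way to visualize this clustering (an extra check, not part of the original tutorial) is to plot a rolling standard deviation of the returns; the 6-month window below is an arbitrary choice.

# Rolling volatility per series; the 6-month window is arbitrary
rolling_vol = returns.copy()
rolling_vol['y'] = rolling_vol.groupby('unique_id')['y'].transform(
    lambda s: s.rolling(window=6).std()
)
StatsForecast.plot(rolling_vol)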

Train models

We first need to import the GARCH and the ARCH models from statsforecast.models, and then we need to fit them by instantiating a new StatsForecast object. Notice that we’ll be using different values of $p$ and $q$. In the next section, we’ll determine which ones produce the most accurate model using cross-validation. We’ll also import the Naive model since we’ll use it as a baseline.

from statsforecast.models import (
    GARCH, 
    ARCH, 
    Naive
)

models = [ARCH(1), 
          ARCH(2), 
          GARCH(1,1),
          GARCH(1,2),
          GARCH(2,2),
          GARCH(2,1),
          Naive()
]

To instantiate a new StatsForecast object, we need the following parameters:

  • df: The dataframe with the training data (as in the code below, this is passed to the cross_validation and forecast methods rather than to the constructor).
  • models: The list of models defined in the previous step.
  • freq: A string indicating the frequency of the data. Here we’ll use MS, which corresponds to the start of the month. You can see the list of pandas’ available frequencies here.
  • n_jobs: An integer that indicates the number of jobs used in parallel processing. Use -1 to select all cores.

sf = StatsForecast(
    models = models, 
    freq = 'MS',
    n_jobs = -1
)

Perform time series cross-validation

Time series cross-validation is a method for evaluating how a model would have performed in the past. It works by defining a sliding window across the historical data and predicting the period following it. Here we’ll use StatsForecast’s cross-validation method to determine the most accurate model for the S&P 500 and the companies selected.

This method takes the following arguments:

  • df: The dataframe with the training data.
  • h (int): the number of steps into the future to be forecasted.
  • step_size (int): the step size between each window, that is, how often you want to run the forecasting process.
  • n_windows (int): the number of windows used for cross-validation, that is, the number of forecasting processes in the past you want to evaluate.

For this particular example, we’ll use 4 windows of 3 months, or all the quarters in a year.

cv_df = sf.cross_validation(
    df = returns,
    h = 3,
    step_size = 3,
    n_windows = 4
  )

The cv_df object is a dataframe with the following columns:

  • unique_id: series identifier.
  • ds: datestamp or temporal index.
  • cutoff: the last datestamp or temporal index for each of the n_windows.
  • y: true value.
  • "model": columns with the model’s name and fitted value.

cv_df.rename(columns = {'y' : 'actual'}, inplace = True)
cv_df.head()
|   | unique_id | ds | cutoff | actual | ARCH(1) | ARCH(2) | GARCH(1,1) | GARCH(1,2) | GARCH(2,2) | GARCH(2,1) | Naive |
|---|-----------|----|--------|--------|---------|---------|------------|------------|------------|------------|-------|
| 0 | AAPL | 2022-01-01 | 2021-12-01 | -0.015837 | 0.142421 | 0.144016 | 0.142954 | 0.141682 | 0.141682 | 0.144015 | 0.073061 |
| 1 | AAPL | 2022-02-01 | 2021-12-01 | -0.056856 | -0.056893 | -0.057158 | -0.056388 | -0.058786 | -0.058785 | -0.057158 | 0.073061 |
| 2 | AAPL | 2022-03-01 | 2021-12-01 | 0.057156 | -0.045901 | -0.046479 | -0.047513 | -0.045711 | -0.045711 | -0.046478 | 0.073061 |
| 3 | AAPL | 2022-04-01 | 2022-03-01 | -0.102178 | 0.138650 | 0.140222 | 0.228138 | 0.136118 | 0.136132 | 0.140211 | 0.057156 |
| 4 | AAPL | 2022-05-01 | 2022-03-01 | -0.057505 | -0.056007 | -0.056268 | -0.087833 | -0.057078 | -0.057085 | -0.056265 | 0.057156 |
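As a quick sanity check (not part of the original tutorial), the cutoff column should contain exactly n_windows distinct dates, one per cross-validation window.

cv_df['cutoff'].unique()  # expect 4 cutoffs, one per window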
StatsForecast.plot(returns, cv_df.drop(['cutoff', 'actual'], axis=1))

A tutorial on cross-validation can be found here.

Evaluate results

To compute the accuracy of the forecasts, we’ll use the mean absolute error (MAE), which is the sum of the absolute errors divided by the number of forecasts.
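In symbols, for $n$ forecasts:

$$MAE = \frac{1}{n} \sum_{t=1}^{n} |y_t - \hat{y}_t|$$

where $y_t$ is the actual value and $\hat{y}_t$ is the forecast.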

from utilsforecast.losses import mae

The MAE needs to be computed for every window and then averaged across all of them. The mae function does this for each series when we pass it the full cross-validation dataframe.

models = cv_df.columns.drop(['unique_id', 'ds', 'cutoff', 'actual'])
mae_cv = mae(cv_df, models=models, target_col='actual').set_index('unique_id')
mae_cv
| unique_id | ARCH(1) | ARCH(2) | GARCH(1,1) | GARCH(1,2) | GARCH(2,2) | GARCH(2,1) | Naive |
|-----------|---------|---------|------------|------------|------------|------------|-------|
| AAPL | 0.071773 | 0.068927 | 0.080182 | 0.075321 | 0.069187 | 0.068817 | 0.110426 |
| AMZN | 0.127390 | 0.113613 | 0.118859 | 0.119930 | 0.109910 | 0.109910 | 0.115189 |
| GOOG | 0.093849 | 0.093753 | 0.109662 | 0.101583 | 0.094648 | 0.103389 | 0.083233 |
| META | 0.198334 | 0.198893 | 0.199615 | 0.199711 | 0.199712 | 0.198892 | 0.185346 |
| MSFT | 0.082373 | 0.075055 | 0.072241 | 0.072765 | 0.073006 | 0.082066 | 0.086951 |
| NFLX | 0.159386 | 0.159528 | 0.199623 | 0.232477 | 0.230075 | 0.230770 | 0.167421 |
| NKE  | 0.108337 | 0.098918 | 0.103366 | 0.110278 | 0.107179 | 0.102708 | 0.160404 |
| NVDA | 0.189461 | 0.207871 | 0.198999 | 0.196170 | 0.211932 | 0.211940 | 0.215289 |
| SPY  | 0.058511 | 0.058583 | 0.058701 | 0.062492 | 0.057053 | 0.068192 | 0.089012 |
| TSLA | 0.192003 | 0.192618 | 0.190225 | 0.192354 | 0.191620 | 0.191423 | 0.218857 |
mae_cv.idxmin(axis=1)
unique_id
AAPL    GARCH(2,1)
AMZN    GARCH(2,2)
GOOG         Naive
META         Naive
MSFT    GARCH(1,1)
NFLX       ARCH(1)
NKE        ARCH(2)
NVDA       ARCH(1)
SPY     GARCH(2,2)
TSLA    GARCH(1,1)
dtype: object

Hence, the most accurate model to describe the logarithmic returns of Apple’s stock is a GARCH(2,1), for Amazon’s stock it is a GARCH(2,2), and so on.

Forecast volatility

We can now generate a forecast for the next quarter. To do this, we’ll use the forecast method, which requires the following arguments:

  • h: (int) The forecasting horizon.
  • level: (list[float]) The confidence levels of the prediction intervals.
  • fitted: (bool = False) Returns in-sample predictions.

levels = [80, 95] # confidence levels for the prediction intervals 

forecasts = sf.forecast(df=returns, h=3, level=levels)
forecasts.head()
|   | unique_id | ds | ARCH(1) | ARCH(1)-lo-95 | ARCH(1)-lo-80 | ARCH(1)-hi-80 | ARCH(1)-hi-95 | ARCH(2) | ARCH(2)-lo-95 | ARCH(2)-lo-80 | ... | GARCH(2,1) | GARCH(2,1)-lo-95 | GARCH(2,1)-lo-80 | GARCH(2,1)-hi-80 | GARCH(2,1)-hi-95 | Naive | Naive-lo-80 | Naive-lo-95 | Naive-hi-80 | Naive-hi-95 |
|---|-----------|----|---------|---------------|---------------|---------------|---------------|---------|---------------|---------------|-----|------------|------------------|------------------|------------------|------------------|-------|-------------|-------------|-------------|-------------|
| 0 | AAPL | 2023-01-01 | 0.150457 | 0.133641 | 0.139462 | 0.161453 | 0.167273 | 0.150158 | 0.133409 | 0.139206 | ... | 0.147602 | 0.131418 | 0.137020 | 0.158184 | 0.163786 | -0.128762 | -0.284462 | -0.366885 | 0.026939 | 0.109362 |
| 1 | AAPL | 2023-02-01 | -0.056943 | -0.073924 | -0.068046 | -0.045839 | -0.039961 | -0.057207 | -0.074346 | -0.068414 | ... | -0.059512 | -0.078060 | -0.071640 | -0.047384 | -0.040964 | -0.128762 | -0.348956 | -0.465520 | 0.091433 | 0.207997 |
| 2 | AAPL | 2023-03-01 | -0.048391 | -0.064843 | -0.059148 | -0.037633 | -0.031939 | -0.049282 | -0.066345 | -0.060439 | ... | -0.054539 | -0.075438 | -0.068204 | -0.040875 | -0.033641 | -0.128762 | -0.398443 | -0.541204 | 0.140920 | 0.283681 |
| 3 | AMZN | 2023-01-01 | 0.152147 | 0.134952 | 0.140904 | 0.163391 | 0.169343 | 0.148658 | 0.132242 | 0.137924 | ... | 0.148599 | 0.132196 | 0.137873 | 0.159324 | 0.165001 | -0.139141 | -0.315716 | -0.409190 | 0.037435 | 0.130909 |
| 4 | AMZN | 2023-02-01 | -0.057301 | -0.074497 | -0.068545 | -0.046058 | -0.040106 | -0.061187 | -0.080794 | -0.074007 | ... | -0.069303 | -0.094457 | -0.085750 | -0.052856 | -0.044150 | -0.139141 | -0.388856 | -0.521048 | 0.110575 | 0.242767 |
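Although the forecast method returns predicted returns with prediction intervals, the predicted volatility can be backed out from them. Assuming the intervals are Gaussian (an assumption on our part, not stated in the original tutorial), the 95% upper bound equals the point forecast plus 1.96 times the conditional standard deviation.

# Back out the predicted conditional standard deviation from the 95% interval,
# assuming Gaussian prediction intervals (hi-95 = point forecast + 1.96 * sigma)
forecasts['GARCH(1,1)-sigma'] = (
    forecasts['GARCH(1,1)-hi-95'] - forecasts['GARCH(1,1)']
) / 1.96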

With the results of the previous section, we can choose the best model for the S&P 500 and the companies selected. Some of the plots are shown below. Notice that we’re using some additional arguments in the plot method:

  • level: (list[int]) The confidence levels for the prediction intervals (this was already defined).
  • unique_ids: (list[str, int or category]) The ids to plot.
  • models: (list[str]) The models to plot. In this case, the model selected by cross-validation.

# e.g., the S&P 500 with its best model from cross-validation
StatsForecast.plot(returns, forecasts, level=levels, unique_ids=['SPY'], models=['GARCH(2,2)'], max_insample_length=20)
