Volatility forecasting (GARCH & ARCH)
In this example, we'll forecast the volatility of the S&P 500 and several publicly traded companies using GARCH and ARCH models.
Prerequisites
This tutorial assumes basic familiarity with StatsForecast. For a minimal example, visit the Quick Start.
Introduction
The Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model is used for time series that exhibit non-constant volatility over time. Here volatility refers to the conditional standard deviation. The GARCH(p,q) model is given by

$$v_t = \sigma_t \varepsilon_t$$

where $\varepsilon_t$ is independent and identically distributed with zero mean and unit variance, and $\sigma_t^2$ evolves according to

$$\sigma_t^2 = w + \sum_{i=1}^{p} a_i v_{t-i}^2 + \sum_{j=1}^{q} b_j \sigma_{t-j}^2$$

The coefficients in the equation above must satisfy the following conditions:
- $w > 0$, $a_i \geq 0$ for all $i$, and $b_j \geq 0$ for all $j$
- $\sum_{k=1}^{\max(p,q)} (a_k + b_k) < 1$. Here it is assumed that $a_k = 0$ for $k > p$ and $b_k = 0$ for $k > q$.
A particular case of the GARCH model is the ARCH model, in which $q = 0$. Both models are commonly used in finance to model the volatility of stock prices, exchange rates, interest rates, and other financial instruments. They’re also used in risk management to estimate the probability of large variations in the price of financial assets.
By the end of this tutorial, you’ll have a good understanding of how to implement a GARCH or an ARCH model in StatsForecast and how they can be used to analyze and predict financial time series data.
Outline:
- Install libraries
- Load and explore the data
- Train models
- Perform time series cross-validation
- Evaluate results
- Forecast volatility
Tip
You can use Colab to run this Notebook interactively
Install libraries
We assume that you have StatsForecast already installed. If not, check this guide for instructions on how to install StatsForecast.
Install the necessary packages using `pip install statsforecast`.
Load and explore the data
In this tutorial, we’ll use the last 5 years of prices from the S&P 500 and several publicly traded companies. The data can be downloaded from Yahoo! Finance using `yfinance`. To install it, use `pip install yfinance`. We’ll also need `pandas` to deal with the dataframes.
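The snippet below is a minimal sketch of how the data could be downloaded; the ticker list and date range are assumptions inferred from the tables shown in this tutorial.

```python
import pandas as pd
import yfinance as yf

# Tickers inferred from the tables below: the S&P 500 ETF plus nine companies
tickers = ['SPY', 'MSFT', 'AAPL', 'GOOG', 'AMZN', 'TSLA', 'NVDA', 'META', 'NKE', 'NFLX']

# Download 5 years of monthly data; auto_adjust=False keeps the Adj Close column
df = yf.download(tickers, start='2018-01-01', end='2022-12-31',
                 interval='1mo', auto_adjust=False)
df.head()
```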
| Date | Adj Close AAPL | Adj Close AMZN | Adj Close GOOG | Adj Close META | Adj Close MSFT | Adj Close NFLX | Adj Close NKE | Adj Close NVDA | Adj Close SPY | Adj Close TSLA | … | Volume AAPL | Volume AMZN | Volume GOOG | Volume META | Volume MSFT | Volume NFLX | Volume NKE | Volume NVDA | Volume SPY | Volume TSLA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2018-01-01 | 39.741604 | 72.544502 | 58.497002 | 186.889999 | 89.248772 | 270.299988 | 64.929787 | 60.830006 | 258.821686 | 23.620667 | … | 2638717600 | 1927424000 | 574768000 | 495655700 | 574258400 | 238377600 | 157812200 | 1145621600 | 1985506700 | 1864072500 |
| 2018-02-01 | 42.279007 | 75.622498 | 55.236500 | 178.320007 | 88.083969 | 291.380005 | 63.797192 | 59.889591 | 249.410812 | 22.870667 | … | 3711577200 | 2755680000 | 847640000 | 516251600 | 725663300 | 184585800 | 160317000 | 1491552800 | 2923722000 | 1637850000 |
| 2018-03-01 | 39.987053 | 72.366997 | 51.589500 | 159.789993 | 86.138298 | 295.350006 | 63.235649 | 57.348976 | 241.606750 | 17.742001 | … | 2854910800 | 2608002000 | 907066000 | 996201700 | 750754800 | 263449400 | 174066700 | 1411844000 | 2323561800 | 2359027500 |
| 2018-04-01 | 39.386456 | 78.306503 | 50.866501 | 172.000000 | 88.261810 | 312.459991 | 65.288467 | 55.692314 | 243.828018 | 19.593332 | … | 2664617200 | 2598392000 | 834318000 | 750072700 | 668130700 | 262006000 | 158981900 | 1114400800 | 1998466500 | 2854662000 |
| 2018-05-01 | 44.536777 | 81.481003 | 54.249500 | 191.779999 | 93.282692 | 351.600006 | 68.543846 | 62.450180 | 249.755264 | 18.982000 | … | 2483905200 | 1432310000 | 636988000 | 401144100 | 509417900 | 142050800 | 129566300 | 1197824000 | 1606397200 | 2333671500 |
The data downloaded includes different prices. We’ll use the adjusted closing price, which is the closing price after accounting for any corporate actions like stock splits or dividend distributions. It is also the price that is used to examine historical returns.
Notice that the dataframe that `yfinance` returns has a MultiIndex, so we need to select both the adjusted price and the tickers.
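A sketch of this selection, assuming the MultiIndex layout shown above:

```python
# Keep only the adjusted closing prices and flatten the MultiIndex
df = df.loc[:, ('Adj Close', slice(None))]
df.columns = df.columns.droplevel()  # drop the 'Adj Close' level
df = df.reset_index()                # turn the Date index into a column
df.head()
```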
| | Date | SPY | MSFT | AAPL | GOOG | AMZN | TSLA | NVDA | META | NKE | NFLX |
|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 2018-01-01 | 258.821686 | 89.248772 | 39.741604 | 58.497002 | 72.544502 | 23.620667 | 60.830006 | 186.889999 | 64.929787 | 270.299988 |
1 | 2018-02-01 | 249.410812 | 88.083969 | 42.279007 | 55.236500 | 75.622498 | 22.870667 | 59.889591 | 178.320007 | 63.797192 | 291.380005 |
2 | 2018-03-01 | 241.606750 | 86.138298 | 39.987053 | 51.589500 | 72.366997 | 17.742001 | 57.348976 | 159.789993 | 63.235649 | 295.350006 |
3 | 2018-04-01 | 243.828018 | 88.261810 | 39.386456 | 50.866501 | 78.306503 | 19.593332 | 55.692314 | 172.000000 | 65.288467 | 312.459991 |
4 | 2018-05-01 | 249.755264 | 93.282692 | 44.536777 | 54.249500 | 81.481003 | 18.982000 | 62.450180 | 191.779999 | 68.543846 | 351.600006 |
The input to StatsForecast is a dataframe in long format with three columns: `unique_id`, `ds` and `y`:

- `unique_id`: (string, int or category) A unique identifier for the series.
- `ds`: (datestamp or int) A datestamp in format YYYY-MM-DD or YYYY-MM-DD HH:MM:SS, or an integer indexing time.
- `y`: (numeric) The measurement we wish to forecast.
Hence, we need to reshape the data. We’ll do this by creating a new dataframe called `price`.
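One way to do this reshaping, sketched with pandas' `melt`:

```python
# Reshape from wide to long format
price = df.melt(id_vars='Date')
price = price.rename(columns={'Date': 'ds', 'variable': 'unique_id', 'value': 'y'})
price = price[['unique_id', 'ds', 'y']]
price
```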
| | unique_id | ds | y |
|---|---|---|---|
0 | SPY | 2018-01-01 | 258.821686 |
1 | SPY | 2018-02-01 | 249.410812 |
2 | SPY | 2018-03-01 | 241.606750 |
3 | SPY | 2018-04-01 | 243.828018 |
4 | SPY | 2018-05-01 | 249.755264 |
… | … | … | … |
595 | NFLX | 2022-08-01 | 223.559998 |
596 | NFLX | 2022-09-01 | 235.440002 |
597 | NFLX | 2022-10-01 | 291.880005 |
598 | NFLX | 2022-11-01 | 305.529999 |
599 | NFLX | 2022-12-01 | 294.880005 |
We can plot this series using the `plot` method of the StatsForecast class.
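For example (a sketch; `plot` can be called as a class method on a dataframe in long format):

```python
from statsforecast import StatsForecast

# Plot all series in the long-format dataframe
StatsForecast.plot(price)
```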
With the prices, we can compute the logarithmic returns of the S&P 500 and the publicly traded companies. This is the variable we’re interested in, since it’s likely to work well with the GARCH framework. The logarithmic return is given by

$$r_t = \ln\left(\frac{P_t}{P_{t-1}}\right)$$

where $P_t$ is the price at time $t$.
We’ll compute the returns on the price dataframe and then we’ll create a return dataframe in StatsForecast’s format. To do this, we’ll need `numpy`.
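A sketch of this computation (grouping by ticker so returns aren't computed across different series):

```python
import numpy as np

# Logarithmic return: log(P_t / P_{t-1}), computed per series
price['rt'] = price['y'].div(price.groupby('unique_id')['y'].shift(1))
price['rt'] = np.log(price['rt'])

returns = price[['unique_id', 'ds', 'rt']].rename(columns={'rt': 'y'})
returns
```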
| | unique_id | ds | y |
|---|---|---|---|
0 | SPY | 2018-01-01 | NaN |
1 | SPY | 2018-02-01 | -0.037038 |
2 | SPY | 2018-03-01 | -0.031790 |
3 | SPY | 2018-04-01 | 0.009152 |
4 | SPY | 2018-05-01 | 0.024018 |
… | … | … | … |
595 | NFLX | 2022-08-01 | -0.005976 |
596 | NFLX | 2022-09-01 | 0.051776 |
597 | NFLX | 2022-10-01 | 0.214887 |
598 | NFLX | 2022-11-01 | 0.045705 |
599 | NFLX | 2022-12-01 | -0.035479 |
Warning
If the order of magnitude of the data is very small, `scipy.optimize.minimize` might not terminate successfully. In this case, rescale the data and then generate the GARCH or ARCH model.
From this plot, we can see that the returns seem suited for the GARCH framework, since large shocks tend to be followed by other large shocks. This doesn’t mean that after every large shock we should expect another one; merely that the probability of a large variance is greater than the probability of a small one.
Train models
We first need to import the GARCH and the ARCH models from `statsforecast.models`, and then we need to fit them by instantiating a new StatsForecast object. Notice that we’ll be using different values of $p$ and $q$. In the next section, we’ll determine which ones produce the most accurate model using cross-validation. We’ll also import the Naive model, since we’ll use it as a baseline.
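The model set below is a sketch; the specific orders of $p$ and $q$ are inferred from the cross-validation results shown later in this tutorial.

```python
from statsforecast.models import ARCH, GARCH, Naive

# Candidate models with different values of p and q, plus a Naive baseline
models = [
    ARCH(1),
    ARCH(2),
    GARCH(1, 1),
    GARCH(1, 2),
    GARCH(2, 2),
    GARCH(2, 1),
    Naive(),
]
```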
To instantiate a new StatsForecast object, we need the following parameters (an instantiation sketch follows this list):

- `df`: The dataframe with the training data.
- `models`: The list of models defined in the previous step.
- `freq`: A string indicating the frequency of the data. Here we’ll use MS, which corresponds to the start of the month. You can see the list of pandas’ available frequencies here.
- `n_jobs`: An integer that indicates the number of jobs used in parallel processing. Use -1 to select all cores.
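A sketch of the instantiation (note: depending on the StatsForecast version, the dataframe may be passed to the constructor via `df`, as here, or directly to methods such as `cross_validation` and `forecast`):

```python
from statsforecast import StatsForecast

sf = StatsForecast(
    df=returns,    # training data in long format
    models=models, # candidate models defined above
    freq='MS',     # month-start frequency
    n_jobs=-1,     # use all available cores
)
```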
Perform time series cross-validation
Time series cross-validation is a method for evaluating how a model would have performed in the past. It works by defining a sliding window across the historical data and predicting the period following it. Here we’ll use StatsForecast’s `cross_validation` method to determine the most accurate model for the S&P 500 and the companies selected.
This method takes the following arguments:

- `df`: The dataframe with the training data.
- `h` (int): The number of steps into the future that will be forecasted.
- `step_size` (int): The step size between each window, i.e. how often you want to run the forecasting process.
- `n_windows` (int): The number of windows used for cross-validation, i.e. the number of forecasting processes in the past you want to evaluate.
For this particular example, we’ll use 4 windows of 3 months, or all the quarters in a year.
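A sketch of the call with these settings (the dataframe was already passed to the constructor above):

```python
crossvalidation_df = sf.cross_validation(
    h=3,          # forecast 3 months ahead
    step_size=3,  # move the window one quarter at a time
    n_windows=4,  # evaluate the last 4 quarters
)
```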
The `crossvalidation_df` object is a dataframe with the following columns:

- `unique_id`: index.
- `ds`: datestamp or temporal index.
- `cutoff`: the last datestamp or temporal index for the `n_windows`.
- `y`: true value.
- `"model"`: columns with the model’s name and fitted value.

In the table below, the `y` column has been renamed to `actual`.
| | unique_id | ds | cutoff | actual | ARCH(1) | ARCH(2) | GARCH(1,1) | GARCH(1,2) | GARCH(2,2) | GARCH(2,1) | Naive |
|---|---|---|---|---|---|---|---|---|---|---|---|
0 | AAPL | 2022-01-01 | 2021-12-01 | -0.015837 | 0.142416 | 0.144013 | 0.142951 | 0.226098 | 0.141690 | 0.144018 | 0.073061 |
1 | AAPL | 2022-02-01 | 2021-12-01 | -0.056855 | -0.056896 | -0.057158 | -0.056387 | -0.087001 | -0.058787 | -0.057161 | 0.073061 |
2 | AAPL | 2022-03-01 | 2021-12-01 | 0.057156 | -0.045899 | -0.046478 | -0.047512 | -0.073625 | -0.045714 | -0.046479 | 0.073061 |
3 | AAPL | 2022-04-01 | 2022-03-01 | -0.102178 | 0.138661 | 0.140211 | 0.136213 | 0.136124 | 0.136127 | 0.136546 | 0.057156 |
4 | AAPL | 2022-05-01 | 2022-03-01 | -0.057505 | -0.056013 | -0.056268 | -0.054599 | -0.057080 | -0.057085 | -0.053791 | 0.057156 |
A tutorial on cross-validation can be found here.
Evaluate results
To compute the accuracy of the forecasts, we’ll use the mean absolute error (MAE), which is the sum of the absolute errors divided by the number of forecasts. There’s an implementation of the MAE in datasetsforecast, so we’ll install it and then import the `mae` function.
The MAE needs to be computed for every window and then averaged across all of them. To do this, we’ll create the following function.
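A minimal sketch of such a function, assuming the cross-validation output shown above (the helper name `compute_cv_mae` is illustrative):

```python
import pandas as pd
from datasetsforecast.losses import mae

def compute_cv_mae(cv_df):
    """MAE of every model against the actual values in one window."""
    model_cols = cv_df.columns.drop(['unique_id', 'ds', 'cutoff', 'actual'])
    return pd.Series({model: mae(cv_df['actual'], cv_df[model]) for model in model_cols})

# MAE per series and window, then averaged across the windows
mae_cv = (
    crossvalidation_df.groupby(['unique_id', 'cutoff'])
    .apply(compute_cv_mae)
    .groupby('unique_id')
    .mean()
)
mae_cv
```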
| unique_id | ARCH(1) | ARCH(2) | GARCH(1,1) | GARCH(1,2) | GARCH(2,2) | GARCH(2,1) | Naive |
|---|---|---|---|---|---|---|---|
AAPL | 0.068537 | 0.068927 | 0.068929 | 0.085630 | 0.072519 | 0.068556 | 0.110426 |
AMZN | 0.118612 | 0.126182 | 0.118858 | 0.125470 | 0.109913 | 0.109912 | 0.115189 |
GOOG | 0.093849 | 0.093752 | 0.099593 | 0.115136 | 0.094648 | 0.113645 | 0.083233 |
META | 0.198333 | 0.198891 | 0.199617 | 0.199712 | 0.199708 | 0.198890 | 0.185346 |
MSFT | 0.080022 | 0.097301 | 0.082183 | 0.072765 | 0.073006 | 0.080494 | 0.086951 |
NFLX | 0.159384 | 0.159523 | 0.219658 | 0.231798 | 0.230077 | 0.224103 | 0.167421 |
NKE | 0.107842 | 0.114263 | 0.103097 | 0.107180 | 0.107179 | 0.107019 | 0.160405 |
NVDA | 0.189462 | 0.207875 | 0.199004 | 0.196172 | 0.211928 | 0.211928 | 0.215289 |
SPY | 0.058513 | 0.065498 | 0.058700 | 0.057051 | 0.057051 | 0.058526 | 0.089012 |
TSLA | 0.192003 | 0.192620 | 0.190225 | 0.192353 | 0.191620 | 0.191418 | 0.218857 |
Hence, the most accurate model to describe the logarithmic returns of Apple’s stock is an ARCH(1); for Amazon’s stock, it’s a GARCH(2,1), and so on.
Forecast volatility
We can now generate a forecast for the next quarter. To do this, we’ll use the `forecast` method, which requires the following arguments:

- `h`: (int) The forecasting horizon.
- `level`: (list[float]) The confidence levels of the prediction intervals.
- `fitted`: (bool = False) Returns in-sample predictions.
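For example, a sketch with 80% and 95% prediction intervals, matching the columns shown below:

```python
# Forecast the next quarter with 80% and 95% prediction intervals
forecasts = sf.forecast(h=3, level=[80, 95])
forecasts = forecasts.reset_index()
forecasts.head()
```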
| | unique_id | ds | ARCH(1) | ARCH(1)-lo-95 | ARCH(1)-lo-80 | ARCH(1)-hi-80 | ARCH(1)-hi-95 | ARCH(2) | ARCH(2)-lo-95 | ARCH(2)-lo-80 | … | GARCH(2,1) | GARCH(2,1)-lo-95 | GARCH(2,1)-lo-80 | GARCH(2,1)-hi-80 | GARCH(2,1)-hi-95 | Naive | Naive-lo-80 | Naive-lo-95 | Naive-hi-80 | Naive-hi-95 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | AAPL | 2023-01-01 | 0.150457 | 0.133641 | 0.139462 | 0.161452 | 0.167273 | 0.150166 | 0.133415 | 0.139213 | … | 0.147610 | 0.131424 | 0.137027 | 0.158193 | 0.163795 | -0.128762 | -0.284463 | -0.366886 | 0.026939 | 0.109362 |
1 | AAPL | 2023-02-01 | -0.056942 | -0.073923 | -0.068046 | -0.045839 | -0.039961 | -0.057209 | -0.074349 | -0.068417 | … | -0.059511 | -0.078059 | -0.071639 | -0.047384 | -0.040964 | -0.128762 | -0.348956 | -0.465520 | 0.091433 | 0.207997 |
2 | AAPL | 2023-03-01 | -0.048390 | -0.064842 | -0.059148 | -0.037633 | -0.031939 | -0.049279 | -0.066340 | -0.060435 | … | -0.054537 | -0.075435 | -0.068201 | -0.040874 | -0.033640 | -0.128762 | -0.398444 | -0.541205 | 0.140920 | 0.283681 |
3 | AMZN | 2023-01-01 | 0.152158 | 0.134960 | 0.140913 | 0.163404 | 0.169357 | 0.148659 | 0.132243 | 0.137925 | … | 0.148597 | 0.132195 | 0.137872 | 0.159322 | 0.165000 | -0.139141 | -0.315716 | -0.409190 | 0.037435 | 0.130909 |
4 | AMZN | 2023-02-01 | -0.057306 | -0.074504 | -0.068551 | -0.046060 | -0.040107 | -0.061187 | -0.080794 | -0.074007 | … | -0.069302 | -0.094455 | -0.085749 | -0.052856 | -0.044150 | -0.139141 | -0.388856 | -0.521048 | 0.110575 | 0.242767 |
With the results of the previous section, we can choose the best model for the S&P 500 and the companies selected. Some of the plots are shown below. Notice that we’re using some additional arguments in the `plot` method (an example call is sketched after this list):

- `level`: (list[int]) The confidence levels for the prediction intervals (this was already defined).
- `unique_ids`: (list[str, int or category]) The ids to plot.
- `models`: (list[str]) The models to plot. In this case, it’s the model selected by cross-validation.
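A sketch, assuming ARCH(1) was selected for AAPL as in the cross-validation results above:

```python
# Plot the AAPL forecast from the model selected by cross-validation
sf.plot(
    returns,
    forecasts,
    level=[80, 95],
    unique_ids=['AAPL'],
    models=['ARCH(1)'],
)
```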