Introduction

Financial time series analysis has been one of the most active research areas of recent decades. In this guide, we illustrate the stylized facts of financial time series with real financial data. To characterize these facts, models different from the Box-Jenkins ones are needed; for this reason, ARCH models were first proposed by R. F. Engle in 1982 and have since been extended by a great number of scholars. We also demonstrate how to use Python and its libraries to implement ARCH.

As is well known, many time series possess the ARCH effect: although the (model residual) series is white noise, its squared series may be autocorrelated. Moreover, in practice a large number of financial time series are found to have this property, so the ARCH effect has become one of the stylized facts of financial time series.

Stylized Facts of Financial Time Series

Now we briefly list and describe several important stylized facts (features) of financial return series:

  • Fat (heavy) tails: The distribution density function of returns often has fatter (heavier) tails than the tails of the corresponding normal distribution density.

  • ARCH effect: Although the return series can often be seen as white noise, its squared (and absolute) series is usually autocorrelated, and these autocorrelations are rarely negative.

  • Volatility clustering: Large changes in returns tend to be followed by large changes and small changes by small changes, so volatility clusters in time.

  • Asymmetry: The distribution of asset returns is typically slightly negatively skewed. One possible explanation is that traders react more strongly to unfavorable information than to favorable information.

Definition of ARCH Models

Specifically, we give the definition of the ARCH model as follows.

Definition 1. An $\text{ARCH}(p)$ model with order $p \geq 1$ is of the form

$$
\left\{
\begin{array}{ll}
X_t = \sigma_t \varepsilon_t \\
\sigma_{t}^2 = \omega + \alpha_1 X_{t-1}^2 + \alpha_2 X_{t-2}^2 + \cdots + \alpha_p X_{t-p}^2
\end{array}
\right.
\tag{1}
$$

where $\omega \geq 0$, $\alpha_i \geq 0$, and $\alpha_p > 0$ are constants, $\varepsilon_t \sim iid(0, 1)$, and $\varepsilon_t$ is independent of $\{X_k; k \leq t-1\}$. A stochastic process $X_t$ is called an $\text{ARCH}(p)$ process if it satisfies Eq. (1).

By Definition 1, $\sigma_t^2$ (and $\sigma_t$) is independent of $\varepsilon_t$. Usually it is further assumed that $\varepsilon_t \sim N(0, 1)$. Sometimes, however, we suppose instead that $\varepsilon_t$ follows a standardized (skew) Student's t distribution or a generalized error distribution in order to capture more features of a financial time series.
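To make the definition concrete, the following minimal sketch simulates an ARCH(p) path directly from Eq. (1) with Gaussian innovations; the parameter values (omega, alpha) are illustrative choices, not estimates from any data set.

import numpy as np

def simulate_arch(omega, alpha, n, seed=42):
    """Simulate an ARCH(p) process X_t = sigma_t * eps_t with
    sigma_t^2 = omega + sum_i alpha_i * X_{t-i}^2, as in Eq. (1)."""
    rng = np.random.default_rng(seed)
    p = len(alpha)
    x = np.zeros(n + p)          # the first p values act as a zero pre-sample
    sigma2 = np.zeros(n + p)
    for t in range(p, n + p):
        # conditional variance from the p most recent squared observations
        sigma2[t] = omega + np.dot(alpha, x[t - p:t][::-1] ** 2)
        x[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return x[p:], sigma2[p:]

# Illustrative ARCH(2) parameters (assumed, not estimated from data)
returns, cond_var = simulate_arch(omega=0.2, alpha=[0.3, 0.2], n=1000)
print(returns[:5], cond_var[:5])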

Let $\mathscr{F}_s$ denote the information set generated by $\{X_k; k \leq s\}$, namely, the sigma field $\sigma(X_k; k \leq s)$. It is easy to see that $\mathscr{F}_s$ is independent of $\varepsilon_t$ for any $s < t$. According to Definition 1 and the properties of the conditional expectation, we have that

$$
E(X_t|\mathscr{F}_{t-1}) = E(\sigma_t \varepsilon_t|\mathscr{F}_{t-1}) = \sigma_t E(\varepsilon_t|\mathscr{F}_{t-1}) = \sigma_t E(\varepsilon_t) = 0
\tag{2}
$$

and

$$
\text{Var}(X_t|\mathscr{F}_{t-1}) = E(X_{t}^2|\mathscr{F}_{t-1}) = E(\sigma_{t}^2 \varepsilon_{t}^2|\mathscr{F}_{t-1}) = \sigma_{t}^2 E(\varepsilon_{t}^2|\mathscr{F}_{t-1}) = \sigma_{t}^2 E(\varepsilon_{t}^2) = \sigma_{t}^2.
$$

This implies that $\sigma_t^2$ is the conditional variance of $X_t$, and that it evolves according to the previous values $\{X_k^2; t-p \leq k \leq t-1\}$ like an $\text{AR}(p)$ model. This is why Model (1) is called an $\text{ARCH}(p)$ model.

As an example of $\text{ARCH}(p)$ models, let us consider the $\text{ARCH}(1)$ model

$$
\left\{
\begin{array}{ll}
X_t = \sigma_t \varepsilon_t \\
\sigma_{t}^2 = \omega + \alpha_1 X_{t-1}^2
\end{array}
\right.
\tag{3}
$$

Explicitly, the unconditional mean is $E(X_t) = E(\sigma_t \varepsilon_t) = E(\sigma_t) E(\varepsilon_t) = 0.$

Additionally, the ARCH(1) model can be expressed as

$$
X_{t}^2 = \sigma_{t}^2 + X_{t}^2 - \sigma_{t}^2 = \omega + \alpha_1 X_{t-1}^2 + \sigma_{t}^2 \varepsilon_{t}^2 - \sigma_{t}^2 = \omega + \alpha_1 X_{t-1}^2 + \eta_t,
$$

that is,

$$
X_{t}^2 = \omega + \alpha_1 X_{t-1}^2 + \eta_t
\tag{4}
$$

where $\eta_t = \sigma_{t}^2(\varepsilon_{t}^2 - 1)$. It can be shown that $\eta_t$ is a new white noise (hint: $E(\eta_t|\mathscr{F}_{t-1}) = \sigma_t^2\,(E(\varepsilon_t^2) - 1) = 0$); the details are left as an exercise for the reader. Hence, if $0 < \alpha_1 < 1$, Eq. (4) is a stationary $\text{AR}(1)$ model for the series $X_t^2$. Thus, using stationarity ($E(X_{t-1}^2) = E(X_t^2)$), the unconditional variance satisfies

$$
\text{Var}(X_t) = E(X_{t}^2) = E(\omega + \alpha_1 X_{t-1}^2 + \eta_t) = \omega + \alpha_1 E(X_{t}^2),
$$

that is, $\text{Var}(X_t) = E(X_{t}^2) = \dfrac{\omega}{1-\alpha_1}.$
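For instance, with the illustrative values $\omega = 0.2$ and $\alpha_1 = 0.5$ (chosen here purely for the arithmetic), the unconditional variance is

$$
\text{Var}(X_t) = \frac{0.2}{1 - 0.5} = 0.4,
$$

which is twice the value $\omega = 0.2$ that the conditional variance takes when the previous return is zero.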

Moreover, for $h > 0$, in light of the properties of the conditional expectation and by (2), we have that

$$
E(X_{t+h} X_t) = E(E(X_{t+h} X_t|\mathscr{F}_{t+h-1})) = E(X_t E(X_{t+h}|\mathscr{F}_{t+h-1})) = 0.
$$

In conclusion, if $0 < \alpha_1 < 1$, we have that:

  • Any $\text{ARCH}(1)$ process $\{X_t\}$ defined by Eq. (3) is a white noise $WN(0, \omega/(1-\alpha_1))$.

  • Since $X_t^2$ is an $\text{AR}(1)$ process defined by (4), $\text{Corr}(X_{t}^2, X_{t+h}^2) = \alpha_{1}^{|h|} > 0$, which reveals the ARCH effect.

  • It is clear that $E(\eta_t|\mathscr{F}_s) = 0$ for any $t > s$, and with Eq. (4), for any $k > 1$:

$$
\text{Var}(X_{t+k}|\mathscr{F}_t) = E(X_{t+k}^2|\mathscr{F}_t) = E(\omega + \alpha_1 X_{t+k-1}^2 + \eta_{t+k}|\mathscr{F}_t) = \omega + \alpha_1 \text{Var}(X_{t+k-1}|\mathscr{F}_t),
$$

which reflects volatility clustering, that is, large (small) volatility is followed by large (small) volatility.

In addition, it can be shown that $X_t$ defined by Eq. (3) has heavier tails than the corresponding normal distribution. Finally, note that these properties of the ARCH(1) model can be generalized to ARCH(p) models.
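In practice, the ARCH effect described above is usually checked with Engle's Lagrange multiplier test, available in statsmodels as het_arch; the nlags=5 choice below is arbitrary. This minimal sketch applies it to the series simulated after Definition 1, where the test should reject the null of no ARCH effect.

from statsmodels.stats.diagnostic import het_arch

# Engle's LM test on the simulated ARCH(2) series from the earlier sketch:
# H0 = no ARCH effect (no autocorrelation in the squared series)
lm_stat, lm_pvalue, f_stat, f_pvalue = het_arch(returns, nlags=5)
print(f"LM statistic: {lm_stat:.2f}, p-value: {lm_pvalue:.4f}")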

Advantages and disadvantages of the Autoregressive Conditional Heteroskedasticity (ARCH) model:

| Advantages | Disadvantages |
|---|---|
| The ARCH model is useful for modeling volatility in financial time series, which is important for investment decision making and risk management. | The ARCH model assumes that the forecast errors are independent and identically distributed, which may not be realistic in some cases. |
| The ARCH model takes heteroscedasticity into account, which means that it can model time series with variances that change over time. | The ARCH model can be difficult to fit to data with many parameters, which may require large amounts of data or advanced estimation techniques. |
| The ARCH model is relatively easy to use and can be implemented with standard econometrics software. | The ARCH model does not take into account the possible relationship between the mean and the variance of the time series, which may be important in some cases. |

Note:

The ARCH model is a useful tool for modeling volatility in financial time series, but like any econometric model, it has limitations and should be used with caution depending on the specific characteristics of the data being modeled.

Autoregressive Conditional Heteroskedasticity (ARCH) Applications

  • Finance - The ARCH model is widely used in finance to model volatility in financial time series, such as stock prices, exchange rates, interest rates, etc.

  • Economics - The ARCH model can be used to model volatility in economic data, such as GDP, inflation, unemployment, among others.

  • Engineering - The ARCH model can be used in engineering to model volatility in data related to energy, climate, pollution, industrial production, among others.

  • Social Sciences - The ARCH model can be used in the social sciences to model volatility in data related to demography, health, education, among others.

  • Biology - The ARCH model can be used in biology to model volatility in data related to evolution, genetics, epidemiology, among others.

Loading libraries and data

Tip

Statsforecast will be needed. To install, see instructions.

Next, we import plotting libraries and configure the plotting style.

import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.graphics.tsaplots import plot_pacf
plt.style.use('fivethirtyeight')
plt.rcParams['lines.linewidth'] = 1.5
dark_style = {
    'figure.facecolor': '#212946',
    'axes.facecolor': '#212946',
    'savefig.facecolor':'#212946',
    'axes.grid': True,
    'axes.grid.which': 'both',
    'axes.spines.left': False,
    'axes.spines.right': False,
    'axes.spines.top': False,
    'axes.spines.bottom': False,
    'grid.color': '#2A3459',
    'grid.linewidth': '1',
    'text.color': '0.9',
    'axes.labelcolor': '0.9',
    'xtick.color': '0.9',
    'ytick.color': '0.9',
    'font.size': 12 }
plt.rcParams.update(dark_style)

from pylab import rcParams
rcParams['figure.figsize'] = (18,7)

Read Data

Let’s pull the S&P500 stock data from the Yahoo Finance site.

import pandas as pd
import time
from datetime import datetime

ticker = '^GSPC'
period1 = int(time.mktime(datetime(2015, 1, 1, 23, 59).timetuple()))
period2 = int(time.mktime(datetime.now().timetuple()))
interval = '1d' # 1d, 1m

query_string = f'https://query1.finance.yahoo.com/v7/finance/download/{ticker}?period1={period1}&period2={period2}&interval={interval}&events=history&includeAdjustedClose=true'

SP_500 = pd.read_csv(query_string)
SP_500.head()
|   | Date | Open | High | Low | Close | Adj Close | Volume |
|---|---|---|---|---|---|---|---|
| 0 | 2015-01-02 | 2058.899902 | 2072.360107 | 2046.040039 | 2058.199951 | 2058.199951 | 2708700000 |
| 1 | 2015-01-05 | 2054.439941 | 2054.439941 | 2017.339966 | 2020.579956 | 2020.579956 | 3799120000 |
| 2 | 2015-01-06 | 2022.150024 | 2030.250000 | 1992.439941 | 2002.609985 | 2002.609985 | 4460110000 |
| 3 | 2015-01-07 | 2005.550049 | 2029.609985 | 2005.550049 | 2025.900024 | 2025.900024 | 3805480000 |
| 4 | 2015-01-08 | 2030.609985 | 2064.080078 | 2030.609985 | 2062.139893 | 2062.139893 | 3934010000 |
df=SP_500[["Date","Close"]]

The input to StatsForecast is always a data frame in long format with three columns: unique_id, ds and y:

  • The unique_id (string, int or category) represents an identifier for the series.

  • The ds (datestamp) column should be of a format expected by Pandas, ideally YYYY-MM-DD for a date or YYYY-MM-DD HH:MM:SS for a timestamp.

  • The y (numeric) represents the measurement we wish to forecast.

df["unique_id"]="1"
df.columns=["ds", "y", "unique_id"]
df.head()
|   | ds | y | unique_id |
|---|---|---|---|
| 0 | 2015-01-02 | 2058.199951 | 1 |
| 1 | 2015-01-05 | 2020.579956 | 1 |
| 2 | 2015-01-06 | 2002.609985 | 1 |
| 3 | 2015-01-07 | 2025.900024 | 1 |
| 4 | 2015-01-08 | 2062.139893 | 1 |
print(df.dtypes)
ds            object
y            float64
unique_id     object
dtype: object

We need to convert the object type to datetime.

df["ds"] = pd.to_datetime(df["ds"])

Explore data with the plot method

Plot a series using the plot method from the StatsForecast class. This method plots a random series from the dataset and is useful for basic EDA.

from statsforecast import StatsForecast

StatsForecast.plot(df)

The Augmented Dickey-Fuller Test

An Augmented Dickey-Fuller (ADF) test is a statistical test that determines whether a unit root is present in time series data. Unit roots can cause unpredictable results in time series analysis. The test poses a null hypothesis about how strongly the time series is affected by a trend: failing to reject the null hypothesis means we accept the evidence that the series is non-stationary, while rejecting it (accepting the alternative hypothesis) means we accept the evidence that the series is generated by a stationary process (also called trend-stationary). The ADF test statistic is negative, and more negative values indicate a stronger rejection of the null hypothesis.

The Augmented Dickey-Fuller test is a common statistical test used to check whether a given time series is stationary. We frame it by defining the null and alternative hypotheses.

Null hypothesis: the time series is non-stationary; it exhibits a time-dependent trend. Alternative hypothesis: the time series is stationary; it does not depend on time.

ADF (t) statistic < critical values: reject the null hypothesis, the time series is stationary. ADF (t) statistic > critical values: fail to reject the null hypothesis, the time series is non-stationary.

Let's check whether the series we are analyzing is stationary by creating a function that runs the Augmented Dickey-Fuller test.

from statsmodels.tsa.stattools import adfuller

def Augmented_Dickey_Fuller_Test_func(series , column_name):
    print (f'Dickey-Fuller test results for columns: {column_name}')
    dftest = adfuller(series, autolag='AIC')
    dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','No Lags Used','Number of observations used'])
    for key,value in dftest[4].items():
       dfoutput['Critical Value (%s)'%key] = value
    print (dfoutput)
    if dftest[1] <= 0.05:
        print("Conclusion:====>")
        print("Reject the null hypothesis")
        print("The data is stationary")
    else:
        print("Conclusion:====>")
        print("The null hypothesis cannot be rejected")
        print("The data is not stationary")
Augmented_Dickey_Fuller_Test_func(df["y"],'S&P500')
Dickey-Fuller test results for columns: S&P500
Test Statistic          -0.814971
p-value                  0.814685
No Lags Used            10.000000
                          ...    
Critical Value (1%)     -3.433341
Critical Value (5%)     -2.862861
Critical Value (10%)    -2.567473
Length: 7, dtype: float64
Conclusion:====>
The null hypothesis cannot be rejected
The data is not stationary

In the previous result we can see that the Augmented Dickey-Fuller test gives us a p-value of about 0.81, which tells us that the null hypothesis cannot be rejected: the data of our series are not stationary.

We need to difference our time series in order to make the data stationary.

Return Series

Since the 1970s, the financial industry has grown rapidly along with advances in computer and Internet technology. Trading of financial products (including various derivatives) generates a huge amount of data, which forms financial time series. In finance, the return on a financial product is of most interest, so our attention focuses on the return series. If $P_t$ is the closing price at time $t$ for a certain financial product, then the return on this product is

$$
X_t = \frac{P_t - P_{t-1}}{P_{t-1}} \approx \log(P_t) - \log(P_{t-1}).
$$

It is the return series $\{X_t\}$ that has been studied most intensively, and important stylized features that are common across many instruments, markets, and time periods have been summarized. Note that if you purchase the financial product, then it becomes your asset, and its returns become your asset returns. Now let us look at the following examples.

We can estimate the series of returns using pandas' DataFrame.pct_change() function. The pct_change() function has a periods parameter whose default value is 1. If you want to calculate a 30-day return, you must change the value to 30.

df['return'] = 100 * df["y"].pct_change()
df.dropna(inplace=True, how='any')
df.head()
|   | ds | y | unique_id | return |
|---|---|---|---|---|
| 1 | 2015-01-05 | 2020.579956 | 1 | -1.827811 |
| 2 | 2015-01-06 | 2002.609985 | 1 | -0.889347 |
| 3 | 2015-01-07 | 2025.900024 | 1 | 1.162984 |
| 4 | 2015-01-08 | 2062.139893 | 1 | 1.788828 |
| 5 | 2015-01-09 | 2044.810059 | 1 | -0.840381 |
import plotly.express as px
fig = px.line(df, x=df["ds"], y="return",title="SP500 Return Chart",template = "plotly_dark")
fig.show()
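As a quick sanity check on the log-return approximation mentioned above, the following minimal sketch compares the simple percentage return computed with pct_change() against 100 times the difference of log prices; for daily S&P 500 data the two series should agree closely. It assumes the df DataFrame built earlier is still in memory.

import numpy as np
import pandas as pd

# Log-return approximation: 100 * (log P_t - log P_{t-1}) should be close to pct_change for daily data
log_return = 100 * np.log(df["y"]).diff()
comparison = pd.DataFrame({"pct_change": df["return"], "log_return": log_return}).dropna()
print(comparison.head())
print("Max absolute difference:", (comparison["pct_change"] - comparison["log_return"]).abs().max())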

Creating Squared Returns

df['sq_return'] = df["return"].mul(df["return"])
df.head()
|   | ds | y | unique_id | return | sq_return |
|---|---|---|---|---|---|
| 1 | 2015-01-05 | 2020.579956 | 1 | -1.827811 | 3.340891 |
| 2 | 2015-01-06 | 2002.609985 | 1 | -0.889347 | 0.790938 |
| 3 | 2015-01-07 | 2025.900024 | 1 | 1.162984 | 1.352532 |
| 4 | 2015-01-08 | 2062.139893 | 1 | 1.788828 | 3.199906 |
| 5 | 2015-01-09 | 2044.810059 | 1 | -0.840381 | 0.706240 |

Returns vs Squared Returns

from plotly.subplots import make_subplots
import plotly.graph_objects as go

fig = make_subplots(rows=1, cols=2)

fig.add_trace(go.Scatter(x=df["ds"], y=df["return"],
                         mode='lines',
                         name='return'),
row=1, col=1
)


fig.add_trace(go.Scatter(x=df["ds"], y=df["sq_return"],
                         mode='lines',
                         name='sq_return'), 
    row=1, col=2
)

fig.update_layout(height=600, width=800, title_text="Returns vs Squared Returns", template = "plotly_dark")
fig.show()

from scipy.stats import probplot, moment
from statsmodels.tsa.stattools import adfuller, q_stat, acf
import numpy as np
import seaborn as sns

def plot_correlogram(x, lags=None, title=None):    
    lags = min(10, int(len(x)/5)) if lags is None else lags
    fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(14, 8))
    x.plot(ax=axes[0][0], title='Return')
    x.rolling(21).mean().plot(ax=axes[0][0], c='k', lw=1)
    q_p = np.max(q_stat(acf(x, nlags=lags), len(x))[1])
    stats = f'Q-Stat: {np.max(q_p):>8.2f}\nADF: {adfuller(x)[1]:>11.2f}'
    axes[0][0].text(x=.02, y=.85, s=stats, transform=axes[0][0].transAxes)
    probplot(x, plot=axes[0][1])
    # Sample statistics for the annotation box (central moments would make the mean identically zero)
    mean, std, skew, kurtosis = x.mean(), x.std(), x.skew(), x.kurtosis()
    s = f'Mean: {mean:>12.2f}\nSD: {std:>16.2f}\nSkew: {skew:12.2f}\nKurtosis:{kurtosis:9.2f}'
    axes[0][1].text(x=.02, y=.75, s=s, transform=axes[0][1].transAxes)
    plot_acf(x=x, lags=lags, zero=False, ax=axes[1][0])
    plot_pacf(x, lags=lags, zero=False, ax=axes[1][1])
    axes[1][0].set_xlabel('Lag')
    axes[1][1].set_xlabel('Lag')
    fig.suptitle(title+ f'Dickey-Fuller: {adfuller(x)[1]:>11.2f}', fontsize=14)
    sns.despine()
    fig.tight_layout()
    fig.subplots_adjust(top=.9)
plot_correlogram(df["return"], lags=30, title="Time Series Analysis plot \n")

Ljung-Box Test

Ljung-Box is a test for autocorrelation that we can use in tandem with our ACF and PACF plots. The Ljung-Box test takes our data, optionally either the lag values to test or the largest lag value to consider, and whether to also compute the Box-Pierce statistic. Ljung-Box and Box-Pierce are two similar test statistics, Q, that are compared against a chi-squared distribution to determine whether the series is white noise. We might use the Ljung-Box test on the residuals of our model to look for autocorrelation; ideally, our residuals would be white noise.

  • Ho : The data are independently distributed, no autocorrelation.
  • Ha : The data are not independently distributed; they exhibit serial correlation.

The Ljung-Box with the Box-Pierce option will return, for each lag, the Ljung-Box test statistic, Ljung-Box p-values, Box-Pierce test statistic, and Box-Pierce p-values.

If p<α(0.05)p<\alpha (0.05) we reject the null hypothesis.

from statsmodels.stats.diagnostic import acorr_ljungbox

ljung_res = acorr_ljungbox(df["return"], lags= 40, boxpierce=True)

ljung_res.head()
|   | lb_stat | lb_pvalue | bp_stat | bp_pvalue |
|---|---|---|---|---|
| 1 | 49.222273 | 2.285409e-12 | 49.155183 | 2.364927e-12 |
| 2 | 62.991348 | 2.097020e-14 | 62.899234 | 2.195861e-14 |
| 3 | 63.944944 | 8.433622e-14 | 63.850663 | 8.834380e-14 |
| 4 | 74.343652 | 2.742989e-15 | 74.221024 | 2.911751e-15 |
| 5 | 80.234862 | 7.494100e-16 | 80.093498 | 8.022242e-16 |

Split the data into training and testing

Let’s divide our data into sets

  1. Data to train our ARCH model
  2. Data to test our model

For the test data we will hold out the most recent observations of the series (everything after 2023-05-24) to test and evaluate the performance of our model.

df=df[["ds","unique_id","return"]]
df.columns=["ds", "unique_id", "y"]
train = df[df.ds<='2023-05-24'] # hold out the most recent observations for testing
test = df[df.ds>'2023-05-24']
train.shape, test.shape
((2112, 3), (87, 3))

Now let’s plot the training data and the test data.

sns.lineplot(train,x="ds", y="y", label="Train")
sns.lineplot(test, x="ds", y="y", label="Test")
plt.show()

Implementation of ARCH with StatsForecast

The parameters of the ARCH model are listed below. For more information, visit the documentation.

p : int
    Number of lagged versions of the series.
alias : str
    Custom name of the model.
prediction_intervals : Optional[ConformalIntervals]
    Information to compute conformal prediction intervals.
    By default, the model will compute the native prediction
    intervals.
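As an illustration of these parameters, the short sketch below instantiates the model with a custom alias and, optionally, conformal prediction intervals; the ConformalIntervals settings shown (h and n_windows) are illustrative assumptions for this example, not values required by the tutorial.

from statsforecast.models import ARCH
from statsforecast.utils import ConformalIntervals

# ARCH(1) with a custom name; prediction_intervals is optional --
# if omitted, the model computes its native intervals.
arch_native = ARCH(p=1, alias='ARCH1')

# Same model, but asking for conformal prediction intervals
# (h and n_windows here are illustrative choices).
arch_conformal = ARCH(p=1, alias='ARCH1_conformal',
                      prediction_intervals=ConformalIntervals(h=30, n_windows=2))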

Load libraries

from statsforecast import StatsForecast 
from statsforecast.models import ARCH

Building Model

Import and instantiate the models. Setting the argument season_length is sometimes tricky. This article on seasonal periods by Rob Hyndman can be useful.

season_length = 7 # Daily data
horizon = len(test) # number of predictions

models = [ARCH(p=2,)]

We fit the models by instantiating a new StatsForecast object with the following parameters:

  • models: a list of models. Select the models you want from models and import them.

  • freq: a string indicating the frequency of the data. (See pandas’ available frequencies.)

  • n_jobs: int, number of jobs used in the parallel processing, use -1 for all cores.

  • fallback_model: a model to be used if a model fails.

Any settings are passed into the constructor. Then you call its fit method and pass in the historical data frame.

sf = StatsForecast(df=train,
                   models=models,
                   freq='C', # custom business day frequency
                   n_jobs=-1)

Fit the Model

sf.fit()
StatsForecast(models=[ARCH(2)])

Let’s see the results of our ARCH model. We can observe it with the following instruction:

result=sf.fitted_[0,0].model_
result
{'p': 2,
 'q': 0,
 'coeff': array([0.44320919, 0.34706759, 0.35171967]),
 'message': 'Optimization terminated successfully',
 'y_vals': array([-1.12220268, -0.73186004]),
 'sigma2_vals': array([1.38768694,        nan, 1.89277546, ..., 0.76423015, 0.45064543,
        0.88036943]),
 'fitted': array([        nan,         nan,  2.23474473, ..., -1.48032981,
         1.10018826, -0.98050094]),
 'actual_residuals': array([        nan,         nan, -1.07176046, ...,  1.4958333 ,
        -2.22239094,  0.2486409 ])}

Let us now visualize the residuals of our models.

As we can see, the result obtained above is a dictionary. To extract each element from it, we use the .get() method and then store the result in a pd.DataFrame().

residual=pd.DataFrame(result.get("actual_residuals"), columns=["residual Model"])
residual
|      | residual Model |
|---|---|
| 0    | NaN |
| 1    | NaN |
| 2    | -1.071760 |
| ...  | ... |
| 2109 | 1.495833 |
| 2110 | -2.222391 |
| 2111 | 0.248641 |
import scipy.stats as stats

fig, axs = plt.subplots(nrows=2, ncols=2)

# Residuals over time
residual.plot(ax=axs[0,0])
axs[0,0].set_title("Residuals");

# Density plot of the residuals
sns.histplot(residual["residual Model"], kde=True, ax=axs[0,1]);
axs[0,1].set_title("Density plot - Residual");

# Q-Q plot against the normal distribution (drop the NaN pre-sample residuals)
stats.probplot(residual["residual Model"].dropna(), dist="norm", plot=axs[1,0])
axs[1,0].set_title('Plot Q-Q')

# Autocorrelation of the residuals
plot_acf(residual.dropna(), lags=35, ax=axs[1,1], color="fuchsia")
axs[1,1].set_title("Autocorrelation");

plt.show();

Forecast Method

If you want to gain speed in productive settings where you have multiple series or models we recommend using the StatsForecast.forecast method instead of .fit and .predict.

The main difference is that the .forecast method does not store the fitted values and is highly scalable in distributed environments.

The forecast method takes two arguments: forecasts next h (horizon) and level.

  • h (int): represents the forecast h steps into the future. In this case, the length of the test set (87 days ahead).

  • level (list of floats): this optional parameter is used for probabilistic forecasting. Set the level (or confidence percentile) of your prediction interval. For example, level=[90] means that the model expects the real value to be inside that interval 90% of the times.

The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals. Depending on your computer, this step should take around 1min.

Y_hat = sf.forecast(horizon, fitted=True)

Y_hat
| unique_id | ds | ARCH(2) |
|---|---|---|
| 1 | 2023-05-25 | 1.681836 |
| 1 | 2023-05-26 | -0.777028 |
| 1 | 2023-05-29 | -0.677960 |
| ... | ... | ... |
| 1 | 2023-09-20 | 0.136752 |
| 1 | 2023-09-21 | 0.082173 |
| 1 | 2023-09-22 | -0.450958 |
values=sf.forecast_fitted_values()
values.head()
| unique_id | ds | y | ARCH(2) |
|---|---|---|---|
| 1 | 2015-01-05 | -1.827811 | NaN |
| 1 | 2015-01-06 | -0.889347 | NaN |
| 1 | 2015-01-07 | 1.162984 | 2.234745 |
| 1 | 2015-01-08 | 1.788828 | -0.667577 |
| 1 | 2015-01-09 | -0.840381 | -0.752437 |

Adding 95% confidence interval with the forecast method

sf.forecast(h=horizon, level=[95])
| unique_id | ds | ARCH(2) | ARCH(2)-lo-95 | ARCH(2)-hi-95 |
|---|---|---|---|---|
| 1 | 2023-05-25 | 1.681836 | -0.419322 | 3.782995 |
| 1 | 2023-05-26 | -0.777028 | -3.939044 | 2.384989 |
| 1 | 2023-05-29 | -0.677960 | -3.907244 | 2.551323 |
| ... | ... | ... | ... | ... |
| 1 | 2023-09-20 | 0.136752 | -0.795371 | 1.068876 |
| 1 | 2023-09-21 | 0.082173 | -0.852268 | 1.016615 |
| 1 | 2023-09-22 | -0.450958 | -1.337117 | 0.435202 |
Y_hat=Y_hat.reset_index()
Y_hat
|    | unique_id | ds | ARCH(2) |
|---|---|---|---|
| 0  | 1 | 2023-05-25 | 1.681836 |
| 1  | 1 | 2023-05-26 | -0.777028 |
| 2  | 1 | 2023-05-29 | -0.677960 |
| ... | ... | ... | ... |
| 84 | 1 | 2023-09-20 | 0.136752 |
| 85 | 1 | 2023-09-21 | 0.082173 |
| 86 | 1 | 2023-09-22 | -0.450958 |
# Merge the forecasts with the true values
test['unique_id'] = test['unique_id'].astype(int)
Y_hat1 = test.merge(Y_hat, how='left', on=['unique_id', 'ds'])
Y_hat1
|    | ds | unique_id | y | ARCH(2) |
|---|---|---|---|---|
| 0  | 2023-05-25 | 1 | 0.875758 | 1.681836 |
| 1  | 2023-05-26 | 1 | 1.304909 | -0.777028 |
| 2  | 2023-05-30 | 1 | 0.001660 | -0.968701 |
| ... | ... | ... | ... | ... |
| 84 | 2023-09-26 | 1 | -1.473453 | NaN |
| 85 | 2023-09-27 | 1 | 0.022931 | NaN |
| 86 | 2023-09-28 | 1 | 0.589317 | NaN |
# Plot the training data together with the forecasts and the true values

fig, ax = plt.subplots(1, 1)
plot_df = pd.concat([train, Y_hat1]).set_index('ds')
plot_df[['y', "ARCH(2)"]].plot(ax=ax, linewidth=2)
ax.set_title(' Forecast', fontsize=22)
ax.set_ylabel('Return', fontsize=20)
ax.set_xlabel('Timestamp [t]', fontsize=20)
ax.legend(prop={'size': 15})
ax.grid(True)
plt.show()

Predict method with confidence interval

To generate forecasts use the predict method.

The predict method takes two arguments: forecasts the next h (for horizon) and level.

  • h (int): represents the forecast h steps into the future. In this case, the length of the test set (87 days ahead).

  • level (list of floats): this optional parameter is used for probabilistic forecasting. Set the level (or confidence percentile) of your prediction interval. For example, level=[95] means that the model expects the real value to be inside that interval 95% of the times.

The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals.

This step should take less than 1 second.

sf.predict(h=horizon)
| unique_id | ds | ARCH(2) |
|---|---|---|
| 1 | 2023-05-25 | 1.681836 |
| 1 | 2023-05-26 | -0.777028 |
| 1 | 2023-05-29 | -0.677960 |
| ... | ... | ... |
| 1 | 2023-09-20 | 0.136752 |
| 1 | 2023-09-21 | 0.082173 |
| 1 | 2023-09-22 | -0.450958 |
forecast_df = sf.predict(h=horizon, level=[80,95]) 

forecast_df
| unique_id | ds | ARCH(2) | ARCH(2)-lo-95 | ARCH(2)-lo-80 | ARCH(2)-hi-80 | ARCH(2)-hi-95 |
|---|---|---|---|---|---|---|
| 1 | 2023-05-25 | 1.681836 | -0.419322 | 0.307963 | 3.055710 | 3.782995 |
| 1 | 2023-05-26 | -0.777028 | -3.939044 | -2.844559 | 1.290504 | 2.384989 |
| 1 | 2023-05-29 | -0.677960 | -3.907244 | -2.789475 | 1.433555 | 2.551323 |
| ... | ... | ... | ... | ... | ... | ... |
| 1 | 2023-09-20 | 0.136752 | -0.795371 | -0.472731 | 0.746235 | 1.068876 |
| 1 | 2023-09-21 | 0.082173 | -0.852268 | -0.528825 | 0.693172 | 1.016615 |
| 1 | 2023-09-22 | -0.450958 | -1.337117 | -1.030386 | 0.128471 | 0.435202 |

We can join the forecast result with the historical data using the pandas function pd.concat() and then use this result for plotting.

df_plot=pd.concat([df, forecast_df]).set_index('ds').tail(220)
df_plot
| ds | unique_id | y | ARCH(2) | ARCH(2)-lo-95 | ARCH(2)-lo-80 | ARCH(2)-hi-80 | ARCH(2)-hi-95 |
|---|---|---|---|---|---|---|---|
| 2023-03-21 | 1 | 1.298219 | NaN | NaN | NaN | NaN | NaN |
| 2023-03-22 | 1 | -1.646322 | NaN | NaN | NaN | NaN | NaN |
| 2023-03-23 | 1 | 0.298453 | NaN | NaN | NaN | NaN | NaN |
| ... | ... | ... | ... | ... | ... | ... | ... |
| 2023-09-20 | NaN | NaN | 0.136752 | -0.795371 | -0.472731 | 0.746235 | 1.068876 |
| 2023-09-21 | NaN | NaN | 0.082173 | -0.852268 | -0.528825 | 0.693172 | 1.016615 |
| 2023-09-22 | NaN | NaN | -0.450958 | -1.337117 | -1.030386 | 0.128471 | 0.435202 |
def plot_forecasts(y_hist, y_true, y_pred, models):
    _, ax = plt.subplots(1, 1, figsize = (20, 7))
    y_true = y_true.merge(y_pred, how='left', on=['unique_id', 'ds'])
    df_plot = pd.concat([y_hist, y_true]).set_index('ds').tail(12*10)
    df_plot[['y'] + models].plot(ax=ax, linewidth=2 )
    colors = ['green']
    # Specify graph features:
    ax.fill_between(df_plot.index, 
                df_plot['ARCH(2)-lo-80'], 
                df_plot['ARCH(2)-hi-80'],
                alpha=.20,
                color='lime',
                label='ARCH(2)_level_80')
    ax.fill_between(df_plot.index, 
                df_plot['ARCH(2)-lo-95'], 
                df_plot['ARCH(2)-hi-95'],
                alpha=.2,
                color='white',
                label='ARCH(2)_level_95')
    ax.set_title('', fontsize=22)
    ax.set_ylabel("Return", fontsize=20)
    ax.set_xlabel('Month-Days', fontsize=20)
    ax.legend(prop={'size': 15})
    ax.grid(True)
    plt.show()
plot_forecasts(train, test, forecast_df, models=["ARCH(2)"])

Let’s plot the same graph using the plot function that comes in Statsforecast, as shown below.
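A minimal sketch of that call, assuming the sf object and the forecast_df data frame produced above, could be:

# Plot the historical series together with the forecasts and their 95% interval
sf.plot(df, forecast_df, level=[95])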

Cross-validation

In previous steps, we’ve taken our historical data to predict the future. However, to asses its accuracy we would also like to know how the model would have performed in the past. To assess the accuracy and robustness of your models on your data perform Cross-Validation.

With time series data, Cross Validation is done by defining a sliding window across the historical data and predicting the period following it. This form of cross-validation allows us to arrive at a better estimation of our model’s predictive abilities across a wider range of temporal instances while also keeping the data in the training set contiguous as is required by our models.

The following graph depicts such a Cross Validation Strategy:

Perform time series cross-validation

Cross-validation of time series models is considered a best practice but most implementations are very slow. The statsforecast library implements cross-validation as a distributed operation, making the process less time-consuming to perform. If you have big datasets you can also perform Cross Validation in a distributed cluster using Ray, Dask or Spark.

In this case, we want to evaluate the performance of the model over the last 5 windows (n_windows=5), forecasting every 6 steps (step_size=6). Depending on your computer, this step should take around 1 min.

The cross_validation method from the StatsForecast class takes the following arguments.

  • df: training data frame

  • h (int): represents h steps into the future that are being forecasted. In this case, the same horizon used before (the length of the test set).

  • step_size (int): step size between each window. In other words: how often do you want to run the forecasting processes.

  • n_windows(int): number of windows used for cross validation. In other words: what number of forecasting processes in the past do you want to evaluate.

crossvalidation_df = sf.cross_validation(df=train,
                                         h=horizon,
                                         step_size=6,
                                         n_windows=5)

The crossvalidation_df object is a new data frame that includes the following columns:

  • unique_id: index. If you don’t like working with the index, just run crossvalidation_df.reset_index().
  • ds: datestamp or temporal index
  • cutoff: the last datestamp or temporal index for the n_windows.
  • y: true value
  • "model": columns with the model’s name and fitted value.
crossvalidation_df
| unique_id | ds | cutoff | y | ARCH(2) |
|---|---|---|---|---|
| 1 | 2022-12-21 | 2022-12-20 | -0.605272 | 1.889850 |
| 1 | 2022-12-22 | 2022-12-20 | -2.492167 | -0.850434 |
| 1 | 2022-12-23 | 2022-12-20 | -1.113775 | -0.742012 |
| ... | ... | ... | ... | ... |
| 1 | 2023-05-22 | 2023-01-23 | 0.015503 | 0.135570 |
| 1 | 2023-05-23 | 2023-01-23 | -1.122203 | 0.081367 |
| 1 | 2023-05-24 | 2023-01-23 | -0.731860 | -0.446374 |

Model Evaluation

We can now compute the accuracy of the forecast using an appropriate accuracy metric. Here we’ll use the Root Mean Squared Error (RMSE). To do this, we first need to install datasetsforecast, a Python library developed by Nixtla that includes a function to compute the RMSE.

!pip install datasetsforecast
from datasetsforecast.losses import rmse

The function to compute the RMSE takes two arguments:

  1. The actual values.
  2. The forecasts, in this case, ARCH.
rmse = rmse(crossvalidation_df['y'], crossvalidation_df["ARCH(2)"])
print("RMSE using cross-validation: ", rmse)
RMSE using cross-validation:  1.3816124

As you have noticed, we have used the cross validation results to perform the evaluation of our model.
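Since the cross-validation data frame contains a cutoff column, it can also be informative to look at the error separately for each window rather than only the pooled value. A small sketch of that idea, re-importing the loss function under a fresh name because the variable rmse above now holds a number:

from datasetsforecast.losses import rmse as rmse_loss

# RMSE computed separately for each cross-validation window (cutoff)
cv_rmse_per_window = (
    crossvalidation_df
    .groupby('cutoff')
    .apply(lambda w: rmse_loss(w['y'].values, w['ARCH(2)'].values))
)
print(cv_rmse_per_window)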

Now we are going to evaluate our model with the results of the predictions, we will use different types of metrics MAE, MAPE, MASE, RMSE, SMAPE to evaluate the accuracy.

from datasetsforecast.losses import mae, mape, mase, rmse, smape
def evaluate_performace(y_hist, y_true, y_pred, model):
    y_true = y_true.merge(y_pred, how='left', on=['unique_id', 'ds'])
    evaluation = {}
    evaluation[model] = {}
    for metric in [mase, mae, mape, rmse, smape]:
        metric_name = metric.__name__
        if metric_name == 'mase':
            evaluation[model][metric_name] = metric(y_true['y'].values, 
                                                y_true[model].values, 
                                                y_hist['y'].values, seasonality=12)
        else:
            evaluation[model][metric_name] = metric(y_true['y'].values, y_true[model].values)
    return pd.DataFrame(evaluation).T
evaluate_performace(train, test, Y_hat, model="ARCH(2)")
|   | mae | mape | mase | rmse | smape |
|---|---|---|---|---|---|
| ARCH(2) | 0.935485 | 1064.149021 | NaN | 1.152612 | 138.403076 |

Acknowledgements

We would like to thank Naren Castellon for writing this tutorial.

References

  1. Changquan Huang and Alla Petukhina (2022). Applied Time Series Analysis and Forecasting with Python. Springer.
  2. Engle, R. F. (1982). Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica: Journal of the Econometric Society, 987-1007.
  3. James D. Hamilton (1994). Time Series Analysis. Princeton University Press, Princeton, New Jersey, 1st Edition.
  4. Nixtla Parameters.
  5. Pandas available frequencies.
  6. Rob J. Hyndman and George Athanasopoulos (2018). Forecasting: Principles and Practice, "Time series cross-validation".
  7. Seasonal periods - Rob J Hyndman.