
Introduction

The autoregressive time series model (AutoRegressive) is a statistical technique used to analyze and predict univariate time series. In essence, the autoregressive model is based on the idea that previous values of the time series can be used to predict future values.

In this model, the dependent variable (the time series) is regressed on itself at different moments in time, creating a dependency relationship between past and present values. The idea is that past values can help us understand and predict future values of the series.

The autoregressive model can be fitted to different orders, which indicate how many past values are used to predict the present value. For example, an autoregressive model of order 1, AR(1), uses only the immediately previous value to predict the current value, while an autoregressive model of order p, AR(p), uses the p previous values.

The autoregressive model is one of the basic models of time series analysis and is widely used in a variety of fields, from finance and economics to meteorology and social sciences. The model’s ability to capture linear dependencies in time series data makes it especially useful for forecasting and long-term trend analysis.

In a multiple regression model, we forecast the variable of interest using a linear combination of predictors. In an autoregression model, we forecast the variable of interest using a linear combination of past values of the variable. The term autoregression indicates that it is a regression of the variable against itself.

Definition of Autoregressive Models

Before giving a formal definition of the ARCH model, let’s define the components of an ARCH model in a general way:

  • Autoregressive, a concept we have already seen, refers to building a univariate time series model with statistical methods in which the current value of a variable is influenced by its own past values at different periods.
  • Heteroscedasticity means that the model can have different magnitudes or variability at different time points (variance changes over time).
  • Conditional, since the volatility is not fixed: the variance in the model is made conditionally dependent on the previous value or values of the variable.

The AR model is the most basic building block of univariate time series. As you have seen before, univariate time series are a family of models that use only information about the target variable’s past to forecast its future, and do not rely on other explanatory variables.

Definition 1. (1) The following equation is called the autoregressive model of order $p$ and denoted by $\text{AR}(p)$:

\begin{equation} X_t = \varphi_0 + \varphi_1 X_{t-1} + \varphi_2 X_{t-2} + \cdots + \varphi_p X_{t-p} + \varepsilon_t \tag{1} \end{equation}

where $\{\varepsilon_t\} \sim WN(0, \sigma_{\varepsilon}^2)$, $E(X_s \varepsilon_t) = 0$ for $s < t$, and $\varphi_0, \varphi_1, \cdots, \varphi_p$ are real-valued parameters (coefficients) with $\varphi_p \neq 0$.

(2) If a time series $\{X_t\}$ is stationary and satisfies an equation such as (1), then we call it an $\text{AR}(p)$ process.

Note the following remarks about this definition:

  • For simplicity, we often assume that the intercept (constant term) $\varphi_0 = 0$; otherwise, we can consider $\{X_t - \mu\}$ where $\mu = \varphi_0 / (1 - \varphi_1 - \cdots - \varphi_p)$.

  • We distinguish the concept of $\text{AR}$ models from the concept of $\text{AR}$ processes. $\text{AR}$ models may or may not be stationary, and $\text{AR}$ processes must be stationary.

  • $E(X_s \varepsilon_t) = 0 \ (s < t)$ means that $X_s$ in the past has nothing to do with $\varepsilon_t$ at the current time $t$.

  • As in the definition of MA models, $\varepsilon_t$ in Eq. (1) is sometimes called the innovation or shock term.

In addition, using the backshift operator $B$, the $\text{AR}(p)$ model can be rewritten as

$$\varphi(B) X_t = \varepsilon_t$$

where $\varphi(z) = 1 - \varphi_1 z - \cdots - \varphi_p z^p$ is called the (corresponding) $\text{AR}$ polynomial. Besides, in the Python package StatsModels, $\varphi(B)$ is called the AR lag polynomial.
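As an illustration of this lag-polynomial convention, the short sketch below builds the polynomial for an assumed AR(2) with $\varphi_1 = 0.6$ and $\varphi_2 = 0.2$ using statsmodels' ArmaProcess (which expects the coefficients of $\varphi(z)$, including the leading 1) and inspects its roots; the coefficient values are purely illustrative.

import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

# Illustrative AR(2): X_t = 0.6 X_{t-1} + 0.2 X_{t-2} + eps_t
# ArmaProcess takes the AR lag polynomial phi(z) = 1 - 0.6 z - 0.2 z^2
ar = np.array([1.0, -0.6, -0.2])
ma = np.array([1.0])
process = ArmaProcess(ar, ma)

print(process.arroots)        # roots of the AR polynomial
print(process.isstationary)   # True when all roots lie outside the unit circle
sample = process.generate_sample(nsample=200)  # simulate 200 observations

For these values both roots have modulus greater than 1, so the generated process is stationary (indeed causal), anticipating the causality discussion below.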

Definition of PACF

Let $\{X_t\}$ be a stationary time series with $E(X_t) = 0$. Here the assumption $E(X_t) = 0$ is for conciseness only. If $E(X_t) = \mu \neq 0$, it is okay to replace $\{X_t\}$ by $\{X_t - \mu\}$. Now consider the linear regression (prediction) of $X_t$ on $\{X_{t-k+1:t-1}\}$ for any integer $k \geq 2$. We use $\hat X_t$ to denote this regression (prediction):

$$\hat X_t = \alpha_1 X_{t-1} + \cdots + \alpha_{k-1} X_{t-k+1}$$

where $\{\alpha_1, \cdots, \alpha_{k-1}\}$ satisfy

$$\{\alpha_1, \cdots, \alpha_{k-1}\} = \argmin_{\beta_1, \cdots, \beta_{k-1}} E[X_t - (\beta_1 X_{t-1} + \cdots + \beta_{k-1} X_{t-k+1})]^2$$

That is, $\{\alpha_1, \cdots, \alpha_{k-1}\}$ are chosen by minimizing the mean squared error of prediction. Similarly, let $\hat X_{t-k}$ denote the regression (prediction) of $X_{t-k}$ on $\{X_{t-k+1:t-1}\}$:

$$\hat X_{t-k} = \eta_1 X_{t-1} + \cdots + \eta_{k-1} X_{t-k+1}$$

Note that if $\{X_t\}$ is stationary, then $\{\alpha_{1:k-1}\} = \{\eta_{1:k-1}\}$. Now let $\hat Z_{t-k} = X_{t-k} - \hat X_{t-k}$ and $\hat Z_t = X_t - \hat X_t$. Then $\hat Z_{t-k}$ is the residual of removing the effect of the intervening variables $\{X_{t-k+1:t-1}\}$ from $X_{t-k}$, and $\hat Z_t$ is the residual of removing the effect of $\{X_{t-k+1:t-1}\}$ from $X_t$.

Definition 2. The partial autocorrelation function (PACF) at lag $k$ of a stationary time series $\{X_t\}$ with $E(X_t) = 0$ is

$$\phi_{11} = \text{Corr}(X_{t-1}, X_t) = \frac{\text{Cov}(X_{t-1}, X_t)}{[\text{Var}(X_{t-1})\,\text{Var}(X_t)]^{1/2}} = \rho_1$$

and

$$\phi_{kk} = \text{Corr}(\hat Z_{t-k}, \hat Z_t) = \frac{\text{Cov}(\hat Z_{t-k}, \hat Z_t)}{[\text{Var}(\hat Z_{t-k})\,\text{Var}(\hat Z_t)]^{1/2}}$$

According to the properties of the correlation coefficient (see, e.g., p. 172, Casella and Berger 2002), $|\phi_{kk}| \leq 1$.

On the other hand, the following theorem paves the way to estimate the PACF of a stationary time series, and its proof can be seen in Fan and Yao (2003).

Theorem 1. Let $\{X_t\}$ be a stationary time series with $E(X_t) = 0$, and let $\{a_{1k}, \cdots, a_{kk}\}$ satisfy

$$\{a_{1k}, \cdots, a_{kk}\} = \argmin_{a_1, \cdots, a_k} E(X_t - a_1 X_{t-1} - \cdots - a_k X_{t-k})^2$$

Then $\phi_{kk} = a_{kk}$ for $k \geq 1$.
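Theorem 1 suggests a direct way to estimate the PACF: regress $X_t$ on its $k$ most recent lags and keep the last coefficient. The sketch below does this with NumPy least squares on a simulated AR(1) series and compares the result with statsmodels' pacf (OLS option); the simulated coefficient and sample size are illustrative assumptions.

import numpy as np
from statsmodels.tsa.stattools import pacf

rng = np.random.default_rng(0)

# Simulate an AR(1) series X_t = 0.6 X_{t-1} + eps_t (illustrative parameters)
n, phi = 500, 0.6
eps = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

def pacf_at_lag(x, k):
    """Estimate phi_kk by regressing X_t on (X_{t-1}, ..., X_{t-k})."""
    y = x[k:]
    X = np.column_stack([x[k - j: len(x) - j] for j in range(1, k + 1)])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs[-1]  # a_kk, the partial autocorrelation at lag k

print("lag 1:", pacf_at_lag(x, 1), "lag 2:", pacf_at_lag(x, 2))
print("statsmodels:", pacf(x, nlags=2, method="ols")[1:])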

Properties of Autoregressive Models

From the $\text{AR}(p)$ model, namely Eq. (1), we can see that it has the same form as the multiple linear regression model. However, it explains the current value of the series with its own past. Given the past

$$\{X_{(t-p):(t-1)}\} = \{x_{(t-p):(t-1)}\}$$

we have $$E(X_t \mid X_{(t-p):(t-1)}) = \varphi_0 + \varphi_1 x_{t-1} + \varphi_2 x_{t-2} + \cdots + \varphi_p x_{t-p}$$

This suggests that, given the past, the right-hand side of this equation is a good estimate of $X_t$. Besides,

$$\text{Var}(X_t \mid X_{(t-p):(t-1)}) = \text{Var}(\varepsilon_t) = \sigma_{\varepsilon}^2$$

Now we suppose that the AR(p) model, namely, Eq. (1), is stationary; then we have

  1. The model mean $E(X_t) = \mu = \varphi_0 / (1 - \varphi_1 - \cdots - \varphi_p)$. Thus, the model mean $\mu = 0$ if and only if $\varphi_0 = 0$.

  2. If the mean is zero or $\varphi_0 = 0$ ((3) and (4) below make the same assumption), then noting that $E(X_t \varepsilon_t) = \sigma_{\varepsilon}^2$, we multiply Eq. (1) by $X_t$, take expectations, and get

$$\text{Var}(X_t) = \gamma_0 = \varphi_1 \gamma_1 + \varphi_2 \gamma_2 + \cdots + \varphi_p \gamma_p + \sigma_{\varepsilon}^2$$

Furthermore

$$\gamma_0 = \sigma_{\varepsilon}^2 / (1 - \varphi_1 \rho_1 - \varphi_2 \rho_2 - \cdots - \varphi_p \rho_p)$$
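For instance, for a stationary AR(1) with coefficient $\varphi$, the recursion in (4) below gives $\rho_1 = \varphi$, so this formula reduces to the familiar $\gamma_0 = \sigma_{\varepsilon}^2 / (1 - \varphi^2)$.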

  3. For all $k > p$, the partial autocorrelation $\phi_{kk} = 0$; that is, the PACF of $\text{AR}(p)$ models cuts off after lag $p$, which is very helpful in identifying an $\text{AR}$ model. In fact, at this point, the predictor or regression of $X_t$ on $\{X_{t-k+1:t-1}\}$ is

$$\hat X_t = \varphi_1 X_{t-1} + \cdots + \varphi_{k-1} X_{t-k+1}$$

Thus, $X_t - \hat X_t = \varepsilon_t$. Moreover, $X_{t-k} - \hat X_{t-k}$ is a function of $\{X_{t-k:t-1}\}$, and $\varepsilon_t$ is uncorrelated with every member of $\{X_{t-k:t-1}\}$. Therefore

$$\text{Cov}(X_{t-k} - \hat X_{t-k}, X_t - \hat X_t) = \text{Cov}(X_{t-k} - \hat X_{t-k}, \varepsilon_t) = 0.$$

By Definition 2, $\phi_{kk} = 0$.

  4. We multiply Eq. (1) by $X_{t-k}$, take expectations, divide by $\gamma_0$, and then obtain the recursive relationship between the autocorrelations:
\begin{equation} \rho_k = \varphi_1 \rho_{k-1} + \varphi_2 \rho_{k-2} + \cdots + \varphi_p \rho_{k-p}, \quad k \geq 1 \tag{2} \end{equation}

For Eq. (2), let $k = 1, 2, \cdots, p$. Then we arrive at a set of difference equations, known as the Yule-Walker equations. If the ACF $\{\rho_{1:p}\}$ is given, then we can solve the Yule-Walker equations to obtain estimates for $\{\varphi_{1:p}\}$; the solutions are called the Yule-Walker estimates (see the short NumPy sketch after these properties).

  5. Since the model is a stationary $\text{AR}(p)$, it naturally satisfies $X_t = \varphi_1 X_{t-1} + \varphi_2 X_{t-2} + \cdots + \varphi_p X_{t-p} + \varepsilon_t$. Hence $\phi_{pp} = \varphi_p$. If the $\text{AR}(p)$ model is further Gaussian and a sample of size $T$ is given, then (a) $\hat\phi_{pp} \to \varphi_p$ as $T \to \infty$; (b) according to Quenouille (1949), for $k > p$, $\sqrt{T}\,\hat\phi_{kk}$ asymptotically follows the standard normal (Gaussian) distribution $N(0, 1)$, or $\hat\phi_{kk}$ is asymptotically distributed as $N(0, 1/T)$.
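Before moving on, here is a minimal NumPy sketch of the Yule-Walker estimates mentioned in (4), for an assumed AR(2); the autocorrelation values are made up for illustration (statsmodels also ships a yule_walker helper that estimates the coefficients directly from data).

import numpy as np

# Assumed sample autocorrelations rho_1, rho_2 (illustrative values)
rho = np.array([0.8, 0.55])

# Yule-Walker system for p = 2:
#   rho_1 = phi_1 + phi_2 * rho_1
#   rho_2 = phi_1 * rho_1 + phi_2
R = np.array([[1.0, rho[0]],
              [rho[0], 1.0]])
phi_hat = np.linalg.solve(R, rho)
print("Yule-Walker estimates of (phi_1, phi_2):", phi_hat)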

Stationarity and Causality of AR Models

Consider the AR(1) model:

\begin{equation} X_t = \varphi X_{t-1} + \varepsilon_t, \quad \varepsilon_t \sim WN(0, \sigma_{\varepsilon}^2) \tag{3} \end{equation}

For $|\varphi| < 1$, let $X_{1t} = \sum_{j=0}^{\infty} \varphi^j \varepsilon_{t-j}$, and for $|\varphi| > 1$, let $X_{2t} = -\sum_{j=1}^{\infty} \varphi^{-j} \varepsilon_{t+j}$. It is easy to show that both $\{X_{1t}\}$ and $\{X_{2t}\}$ are stationary and satisfy Eq. (3); that is, both are stationary solutions of Eq. (3). This raises a question: which of the two is preferable? Obviously, $\{X_{2t}\}$ depends on future values of the unobservable $\{\varepsilon_t\}$, and so it is unnatural. Hence we take $\{X_{1t}\}$ and abandon $\{X_{2t}\}$. In other words, we require that the coefficient $\varphi$ in Eq. (3) be less than 1 in absolute value. At this point, the $\text{AR}(1)$ model is said to be causal and its causal expression is $X_t = \sum_{j=0}^{\infty} \varphi^j \varepsilon_{t-j}$. In general, the definition of causality is given below.
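To see the causal MA($\infty$) representation in action, the sketch below (with an illustrative $\varphi = 0.7$) simulates an AR(1) path recursively and then rebuilds one of its values from a truncated sum of past innovations; the two agree up to a negligible truncation error.

import numpy as np

rng = np.random.default_rng(42)
phi, n, burn = 0.7, 200, 500        # illustrative coefficient, sample size, burn-in

eps = rng.normal(size=n + burn)
x = np.zeros(n + burn)
for t in range(1, n + burn):
    x[t] = phi * x[t - 1] + eps[t]  # recursive simulation of Eq. (3)
x, eps = x[burn:], eps[burn:]

# Rebuild X_t from the truncated causal representation: X_t ~ sum_{j=0}^{J} phi^j * eps_{t-j}
J, t = 50, n - 1
x_causal = sum(phi ** j * eps[t - j] for j in range(J + 1))
print(x[t], x_causal)               # nearly identical, since phi^J is tiny for |phi| < 1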

Definition 3. (1) A time series $\{X_t\}$ is causal if there exist coefficients $\psi_j$ such that

$$X_t = \sum_{j=0}^{\infty} \psi_j \varepsilon_{t-j}, \quad \sum_{j=0}^{\infty} |\psi_j| < \infty$$

where $\psi_0 = 1$ and $\{\varepsilon_t\} \sim WN(0, \sigma_{\varepsilon}^2)$. At this point, we say that the time series $\{X_t\}$ has an $\text{MA}(\infty)$ representation.

(2) We say that a model is causal if the time series generated by it is causal.

Causality suggests that the time series $\{X_t\}$ is driven by the white noise (or innovations) from the past up to time $t$. Besides, the time series $\{X_{2t}\}$ above is an example that is stationary but not causal. In order to determine whether an $\text{AR}$ model is causal, similar to invertibility for the $\text{MA}$ model, we have the following theorem.

Theorem 2 (Causality Theorem). An $\text{AR}$ model defined by Eq. (1) is causal if and only if the roots of its $\text{AR}$ polynomial $\varphi(z) = 1 - \varphi_1 z - \cdots - \varphi_p z^p$ exceed 1 in modulus, that is, lie outside the unit circle in the complex plane.

Note the following remarks:

  • In light of the existence and uniqueness result on page 75 of Brockwell and Davis (2016), an $\text{AR}$ model defined by Eq. (1) is stationary if and only if its $\text{AR}$ polynomial $\varphi(z) = 1 - \varphi_1 z - \cdots - \varphi_p z^p \neq 0$ for all $|z| = 1$, that is, none of the roots of the $\text{AR}$ polynomial lie on the unit circle. Hence for the AR model defined by Eq. (1), its stationarity condition is weaker than its causality condition.

  • A causal time series is surely a stationary one. So an $\text{AR}$ model that satisfies the causality condition is naturally stationary. But a stationary $\text{AR}$ model is not necessarily causal.

  • If the time series $\{X_t\}$ generated by Eq. (1) is not from the remote past, namely, $t \in T = \{\cdots, -n, \cdots, -1, 0, 1, \cdots, n, \cdots\}$,

but starts from an initial value $X_0$, then it may be nonstationary, let alone causal.

  • According to the relationship between the roots and the coefficients of the degree-2 polynomial $\varphi(z) = 1 - \varphi_1 z - \varphi_2 z^2$, it may be proved that both roots of the polynomial exceed 1 in modulus if and only if

$$\varphi_1 + \varphi_2 < 1, \quad \varphi_2 - \varphi_1 < 1, \quad |\varphi_2| < 1$$

Thus, we can conveniently use these three inequalities to decide whether an $\text{AR}(2)$ model is causal or not (see the sketch after these remarks).

  • It may be shown that for an $\text{AR}(p)$ model defined by Eq. (1), the coefficients $\{\psi_j\}$ in Definition 3 satisfy $\psi_0 = 1$ and

$$\psi_j = \sum_{k=1}^{j} \varphi'_k \psi_{j-k}, \quad j \geq 1, \quad \text{where } \varphi'_k = \varphi_k \text{ if } k \leq p \text{ and } \varphi'_k = 0 \text{ if } k > p$$
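To make the last two remarks concrete, the sketch below first checks the causality conditions for an assumed AR(2) with $\varphi_1 = 0.5$ and $\varphi_2 = 0.3$ (via the roots of the AR polynomial and via the three inequalities) and then computes the first $\psi_j$ weights with the recursion above; all parameter values are illustrative.

import numpy as np

phi1, phi2 = 0.5, 0.3   # illustrative AR(2) coefficients

# Root criterion (Theorem 2): roots of 1 - phi1*z - phi2*z^2 must lie outside the unit circle
roots = np.roots([-phi2, -phi1, 1.0])         # coefficients listed from highest degree down
causal_by_roots = bool(np.all(np.abs(roots) > 1))

# Equivalent three inequalities for AR(2)
causal_by_ineq = (phi1 + phi2 < 1) and (phi2 - phi1 < 1) and (abs(phi2) < 1)
print(causal_by_roots, causal_by_ineq)        # both True for these values

# MA(infinity) weights via the recursion psi_j = sum_{k=1}^{j} phi'_k * psi_{j-k}
phi = {1: phi1, 2: phi2}                      # phi'_k = 0 for k > p
psi = [1.0]
for j in range(1, 6):
    psi.append(sum(phi.get(k, 0.0) * psi[j - k] for k in range(1, j + 1)))
print(psi)                                    # [1.0, 0.5, 0.55, 0.425, 0.3775, 0.31625]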

Autocorrelation: the past influences the present

The autoregressive model describes a relationship between the present of a variable and its past. Therefore, it is suitable for variables in which the past and present values are correlated.

As an intuitive example, consider the waiting line at the doctor's office. Imagine that the doctor has a schedule in which each patient gets 20 minutes. If each patient takes exactly 20 minutes, this works well. But what if a patient takes a little longer? Autocorrelation may be present if the duration of one appointment has an impact on the duration of the next. So if the doctor needs to speed up an appointment because the previous one took too long, we see a correlation between the past and the present: past values influence future values.

Positive and negative autocorrelation

Like “regular” correlation, autocorrelation can be positive or negative. Positive autocorrelation means that a high value now is likely to give a high value in the next period. This can be observed, for example, in stock trading: as soon as a lot of people want to buy a stock, its price goes up. This positive trend makes people want to buy this stock even more as it has positive returns. The more people buy the stock, the higher it goes and the more people will want to buy it.

A positive correlation also works in downtrends. If today’s stock value is low, tomorrow’s value is likely to be even lower as people start selling. When many people sell, the value falls, and even more people will want to sell. This is also a case of positive autocorrelation since the past and the present go in the same direction. If the past is low, the present is low; and if the past is high, the present is high.

There is negative autocorrelation if consecutive values move in opposite directions. This is the case in the example of the doctor's visits: if one visit takes longer, the next one will be shorter; and if one visit takes less time, the doctor may take a little longer for the next one.

Stationarity and the ADF test

Having a trend in the data is a general problem in univariate time series modeling. Stationarity of a time series means that the series does not have a (long-term) trend: it is stable around the same average. Otherwise, the time series is said to be non-stationary.

In theory, AR models can have a trend coefficient in the model, but since stationarity is an important concept in general time series theory, it’s best to learn to deal with it right away. Many models can only work on stationary time series.

A time series that is strongly growing or falling over time is obvious to spot. But sometimes it is hard to tell whether a time series is stationary. This is where the Augmented Dickey-Fuller (ADF) test comes in handy.

Loading libraries and data

Tip

Statsforecast will be needed. To install, see instructions.

Next, we import plotting libraries and configure the plotting style.

import pandas as pd

import scipy.stats as stats

import statsmodels.api as sm
import statsmodels.tsa.api as smt
import matplotlib.pyplot as plt
import seaborn as sns
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

plt.style.use('fivethirtyeight')
plt.rcParams['lines.linewidth'] = 1.5
dark_style = {
    'figure.facecolor': '#212946',
    'axes.facecolor': '#212946',
    'savefig.facecolor':'#212946',
    'axes.grid': True,
    'axes.grid.which': 'both',
    'axes.spines.left': False,
    'axes.spines.right': False,
    'axes.spines.top': False,
    'axes.spines.bottom': False,
    'grid.color': '#2A3459',
    'grid.linewidth': '1',
    'text.color': '0.9',
    'axes.labelcolor': '0.9',
    'xtick.color': '0.9',
    'ytick.color': '0.9',
    'font.size': 12 }
plt.rcParams.update(dark_style)

from pylab import rcParams
rcParams['figure.figsize'] = (18,7)

Read Data

df= pd.read_csv("https://raw.githubusercontent.com/Naren8520/Serie-de-tiempo-con-Machine-Learning/main/Data/catfish.csv")
df.head()
        Date  Total
0  1986-1-01   9034
1  1986-2-01   9596
2  1986-3-01  10558
3  1986-4-01   9002
4  1986-5-01   9239

The input to StatsForecast is always a data frame in long format with three columns: unique_id, ds and y:

  • The unique_id (string, int or category) represents an identifier for the series.

  • The ds (datestamp) column should be of a format expected by Pandas, ideally YYYY-MM-DD for a date or YYYY-MM-DD HH:MM:SS for a timestamp.

  • The y (numeric) represents the measurement we wish to forecast.

df["unique_id"]="1"
df.columns=["ds", "y", "unique_id"]
df.head()
          ds      y unique_id
0  1986-1-01   9034         1
1  1986-2-01   9596         1
2  1986-3-01  10558         1
3  1986-4-01   9002         1
4  1986-5-01   9239         1
print(df.dtypes)
ds           object
y             int64
unique_id    object
dtype: object

We can see that our time variable ds is stored as an object; we need to convert it to a datetime format.

df["ds"] = pd.to_datetime(df["ds"])

Explore data with the plot method

Plot some series using the plot method from the StatsForecast class. This method plots 8 random series from the dataset and is useful for basic EDA.

from statsforecast import StatsForecast

StatsForecast.plot(df)

The Augmented Dickey-Fuller Test

An Augmented Dickey-Fuller (ADF) test is a type of statistical test that determines whether a unit root is present in time series data. Unit roots can cause unpredictable results in time series analysis. The unit root test forms a null hypothesis to determine how strongly the time series data is affected by a trend. If we fail to reject the null hypothesis, we accept the evidence that the time series data is not stationary. If we reject the null hypothesis, that is, accept the alternative hypothesis, we accept the evidence that the time series data is generated by a stationary process (such a process is also described as trend-stationary). The values of the ADF test statistic are negative, and lower (more negative) ADF values indicate a stronger rejection of the null hypothesis.

The Augmented Dickey-Fuller test is a common statistical test used to check whether a given time series is stationary or not. We do this by defining the null and alternative hypotheses.

Null hypothesis: the time series is non-stationary; it shows a time-dependent trend.
Alternative hypothesis: the time series is stationary; in other words, the series does not depend on time.

ADF (t) statistic < critical value: reject the null hypothesis; the time series is stationary.
ADF (t) statistic > critical value: fail to reject the null hypothesis; the time series is non-stationary.

Let's check whether the series we are analyzing is stationary. We create a helper function that runs the Augmented Dickey-Fuller test:

from statsmodels.tsa.stattools import adfuller
def Augmented_Dickey_Fuller_Test_func(series , column_name):
    print (f'Dickey-Fuller test results for columns: {column_name}')
    dftest = adfuller(series, autolag='AIC')
    dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','No Lags Used','Number of observations used'])
    for key,value in dftest[4].items():
       dfoutput['Critical Value (%s)'%key] = value
    print (dfoutput)
    if dftest[1] <= 0.05:
        print("Conclusion:====>")
        print("Reject the null hypothesis")
        print("The data is stationary")
    else:
        print("Conclusion:====>")
        print("The null hypothesis cannot be rejected")
        print("The data is not stationary")
Augmented_Dickey_Fuller_Test_func(df["y"],'Sales')
Dickey-Fuller test results for columns: Sales
Test Statistic          -1.589903
p-value                  0.488664
No Lags Used            14.000000
                          ...    
Critical Value (1%)     -3.451691
Critical Value (5%)     -2.870939
Critical Value (10%)    -2.571778
Length: 7, dtype: float64
Conclusion:====>
The null hypothesis cannot be rejected
The data is not stationary

In the previous result we can see that the Augmented Dickey-Fuller test gives a p-value of 0.488664, which tells us that the null hypothesis cannot be rejected; in other words, the data of our series are not stationary.

We need to difference our time series in order to make the data stationary.

Augmented_Dickey_Fuller_Test_func(df["y"].diff().dropna(),"Sales")
Dickey-Fuller test results for columns: Sales
Test Statistic          -4.310935
p-value                  0.000425
No Lags Used            17.000000
                          ...    
Critical Value (1%)     -3.451974
Critical Value (5%)     -2.871063
Critical Value (10%)    -2.571844
Length: 7, dtype: float64
Conclusion:====>
Reject the null hypothesis
The data is stationary

After applying a first difference, our time series is now stationary.

def tsplot(y, lags=None, figsize=(12, 7), style='bmh'): # [3]
    if not isinstance(y, pd.Series):
        y = pd.Series(y)
        
    with plt.style.context(style):    
        fig = plt.figure(figsize=figsize)
        layout = (2, 2)
        ts_ax = plt.subplot2grid(layout, (0, 0), colspan=2)
        acf_ax = plt.subplot2grid(layout, (1, 0))
        pacf_ax = plt.subplot2grid(layout, (1, 1))
        
        y.plot(ax=ts_ax)
        p_value = sm.tsa.stattools.adfuller(y)[1]
        ts_ax.set_title('Time Series Analysis plot\n Dickey-Fuller: p={0:.5f}'.format(p_value))
        smt.graphics.plot_acf(y, lags=lags, ax=acf_ax)
        smt.graphics.plot_pacf(y, lags=lags, ax=pacf_ax)
        plt.tight_layout()
tsplot(df["y"].diff().dropna(), lags=20);

As you can see, based on the blue shaded area of the graph, the PACF shows the first, second, third, fourth, sixth, seventh, ninth, tenth, and other lags outside the shaded area. This means that it would be worthwhile to also include these lags in the AR model.
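Instead of reading the significant lags off the chart by eye, a small sketch like the one below lists the lags whose sample PACF falls outside the approximate ±1.96/√T band (the same band that plot_pacf shades); it reuses the differenced series from the code above.

import numpy as np
from statsmodels.tsa.stattools import pacf

series = df["y"].diff().dropna()
pacf_vals = pacf(series, nlags=20)
conf = 1.96 / np.sqrt(len(series))    # approximate 95% band, as in plot_pacf

significant_lags = [lag for lag, val in enumerate(pacf_vals) if lag > 0 and abs(val) > conf]
print("Lags with PACF outside the band:", significant_lags)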

How many lags should we include?

Now, the big question in time series analysis is always how many lags to include. This is called the order of the time series. The notation is AR(1) for order 1 and AR(p) for order p.

The order is up to you. Theoretically speaking, you can base your order on the PACF chart. Theory tells you to take all the lags up to the last one whose partial autocorrelation is significantly different from zero; all later lags should be zero.

In textbooks, you often see clean charts where the first peak is very high and the rest are essentially zero. In those cases, the choice is easy: you are working with a very “pure” example of AR(1). Another common case is when your partial autocorrelation starts high and slowly decreases to zero. In this case, you should use all the lags where the PACF is not yet zero.

However, in practice, it is not always that simple. Remember the famous saying “all models are wrong, but some are useful”. It is very rare to find cases that fit an AR model perfectly. In general, the autoregression process can help explain part of the variation of a variable, but not all.

In practice, you will try to select the number of lags that gives your model the best predictive performance. The best predictive performance is usually not determined by looking at autocorrelation plots: those plots give you a theoretical starting point. Rather, predictive performance is best determined by model evaluation and benchmarking, using the techniques you have seen in Module 2. Later in this module, we will see how to use model evaluation to choose a well-performing order for the AR model. But before we get into that, it's time to dig into the exact definition of the AR model.

Split the data into training and testing

Let's divide our data into two sets: 1. data to train our AutoRegressive model, and 2. data to test our model.

For the test data we will use the last 12 months to test and evaluate the performance of our model.

train = df[df.ds<='2011-12-01'] 
test = df[df.ds>'2011-12-01']
train.shape, test.shape
((312, 3), (12, 3))

Now let’s plot the training data and the test data.

sns.lineplot(train,x="ds", y="y", label="Train")
sns.lineplot(test, x="ds", y="y", label="Test")
plt.show()

Implementation of AutoRegressive with StatsForecast

The parameters of the AutoRegressive model are listed below. For more information, visit the documentation.

lags : int or list
    Number of lags to include in the model.
    If an int is passed then all lags up to `lags` are considered.
    If a list, only the elements of the list are considered as lags.
include_mean : bool (default=True)
    Should the AutoRegressive model include a mean term?
    The default is True for undifferenced series, False for differenced ones (where a mean would not affect the fit nor predictions).
include_drift : bool (default=False)
    Should the AutoRegressive model include a linear drift term?
    (i.e., a linear regression with AutoRegressive errors is fitted.)
blambda : float, optional (default=None)
    Box-Cox transformation parameter.
biasadj : bool (default=False)
    Use adjusted back-transformed mean Box-Cox.
method : str (default='CSS-ML')
    Fitting method: maximum likelihood or minimize conditional sum-of-squares.
    The default (unless there are missing values) is to use conditional-sum-of-squares to find starting values, then maximum likelihood.
fixed : dict, optional (default=None)
    Dictionary containing fixed coefficients for the AutoRegressive model. Example: `{'ar1': 0.5, 'ar5': 0.75}`.
    For autoregressive terms use the `ar{i}` keys.
alias : str
    Custom name of the model.
prediction_intervals : Optional[ConformalIntervals]
    Information to compute conformal prediction intervals.
    By default, the model will compute the native prediction
    intervals.

Load libraries

from statsforecast import StatsForecast
from statsforecast.models import AutoRegressive

Instantiating Model

Import and instantiate the models. Setting the season_length argument is sometimes tricky. This article on seasonal periods by the master, Rob Hyndman, can be useful.

Method 1: We pass the lags parameter as an integer; all lags up to that value are included in the model.

season_length = 12 # Monthly data 
horizon = len(test) # number of predictions

models2 = [AutoRegressive(lags=14, include_mean=True)]

Method 2: We use the lags parameter in a list format, that is, we put the lags that we want to evaluate in the model in the form of a list as shown below.

season_length = 12 # Monthly data 
horizon = len(test) # number of predictions

models = [AutoRegressive(lags=[3,4,6,7,9,10,11,12,13,14], include_mean=True)]

We fit the models by instantiating a new StatsForecast object with the following parameters:

  • models: a list of models. Select the models you want from models and import them.

  • freq: a string indicating the frequency of the data. (See pandas’s available frequencies.)

  • n_jobs: int, number of jobs used in the parallel processing; use -1 for all cores.

  • fallback_model: a model to be used if a model fails.

Any settings are passed into the constructor. Then you call its fit method and pass in the historical data frame.

sf = StatsForecast(df=train,
                   models=models,
                   freq='MS', 
                   n_jobs=-1)

Fit Model

sf.fit()
StatsForecast(models=[AutoRegressive])

Let's see the results of our AutoRegressive model. We can observe them with the following instruction:

result=sf.fitted_[0,0].model_
print(result.keys())
dict_keys(['coef', 'sigma2', 'var_coef', 'mask', 'loglik', 'aic', 'arma', 'residuals', 'code', 'n_cond', 'nobs', 'model', 'aicc', 'bic', 'xreg', 'lambda', 'x'])

Let us now visualize the residuals of our models.

As we can see, the result obtained above is a dictionary. To extract each element from the dictionary we use the .get() method and then save the result in a pd.DataFrame().

residual=pd.DataFrame(result.get("residuals"), columns=["residual Model"])
residual
     residual Model
0     -11990.435671
1               NaN
2               NaN
309    -2717.643794
310    -1306.165240
311    -2712.663024
fig, axs = plt.subplots(nrows=2, ncols=2)

# plot[1,1]
residual.plot(ax=axs[0,0])
axs[0,0].set_title("Residuals");

# plot
sns.distplot(residual, ax=axs[0,1]);
axs[0,1].set_title("Density plot - Residual");

# plot
stats.probplot(residual["residual Model"], dist="norm", plot=axs[1,0])
axs[1,0].set_title('Plot Q-Q')

# plot
plot_acf(residual,  lags=35, ax=axs[1,1],color="fuchsia")
axs[1,1].set_title("Autocorrelation");

plt.show();

Forecast Method

If you want to gain speed in production settings where you have multiple series or models, we recommend using the StatsForecast.forecast method instead of .fit and .predict.

The main difference is that .forecast does not store the fitted values and is highly scalable in distributed environments.

The forecast method takes two arguments: the forecast horizon h and level.

  • h (int): represents the forecast h steps into the future. In this case, 12 months ahead.

  • level (list of floats): this optional parameter is used for probabilistic forecasting. Set the level (or confidence percentile) of your prediction interval. For example, level=[90] means that the model expects the real value to be inside that interval 90% of the times.

The forecast object here is a new data frame that includes a column with the name of the model and the y-hat values, as well as columns for the uncertainty intervals. Depending on your computer, this step should take around 1 min.

# Prediction
Y_hat = sf.forecast(horizon, fitted=True)

Y_hat
                   ds  AutoRegressive
unique_id
1          2012-01-01    15904.990234
1          2012-02-01    13597.334961
1          2012-03-01    15488.281250
1          2012-10-01    14086.938477
1          2012-11-01    13273.106445
1          2012-12-01    12497.278320
values=sf.forecast_fitted_values()
values.head()
                   ds        y  AutoRegressive
unique_id
1          1986-01-01   9034.0    21024.435547
1          1986-02-01   9596.0             NaN
1          1986-03-01  10558.0             NaN
1          1986-04-01   9002.0   129223.968750
1          1986-05-01   9239.0    10019.207031

Adding 95% confidence interval with the forecast method

sf.forecast(h=horizon, level=[95])
                   ds  AutoRegressive  AutoRegressive-lo-95  AutoRegressive-hi-95
unique_id
1          2012-01-01    15904.990234           1759.574219          30050.406250
1          2012-02-01    13597.334961           -548.080872          27742.751953
1          2012-03-01    15488.281250           1342.864990          29633.697266
1          2012-10-01    14086.938477          -1445.719727          29619.597656
1          2012-11-01    13273.106445          -2283.256104          28829.470703
1          2012-12-01    12497.278320          -3072.115234          28066.671875
Y_hat=Y_hat.reset_index()
Y_hat
    unique_id          ds  AutoRegressive
0           1  2012-01-01    15904.990234
1           1  2012-02-01    13597.334961
2           1  2012-03-01    15488.281250
9           1  2012-10-01    14086.938477
10          1  2012-11-01    13273.106445
11          1  2012-12-01    12497.278320
# Merge the forecasts with the true values
test['unique_id'] = test['unique_id'].astype(int)
Y_hat1 = test.merge(Y_hat, how='left', on=['unique_id', 'ds'])
Y_hat1
            ds      y  unique_id  AutoRegressive
0   2012-01-01  13427          1    15904.990234
1   2012-02-01  14447          1    13597.334961
2   2012-03-01  14717          1    15488.281250
9   2012-10-01  13795          1    14086.938477
10  2012-11-01  13352          1    13273.106445
11  2012-12-01  12716          1    12497.278320
# Plot the training data, the true values and the forecasts

fig, ax = plt.subplots(1, 1)
plot_df = pd.concat([train, Y_hat1]).set_index('ds')
plot_df[['y', "AutoRegressive"]].plot(ax=ax, linewidth=2)
ax.set_title(' Forecast', fontsize=22)
ax.set_ylabel('Total Sales', fontsize=20)
ax.set_xlabel('Timestamp [t]', fontsize=20)
ax.legend(prop={'size': 15})
ax.grid(True)
plt.show()

Predict method with confidence interval

To generate forecasts use the predict method.

The predict method takes two arguments: the forecast horizon h and level.

  • h (int): represents the forecast h steps into the future. In this case, 12 months ahead.

  • level (list of floats): this optional parameter is used for probabilistic forecasting. Set the level (or confidence percentile) of your prediction interval. For example, level=[95] means that the model expects the real value to be inside that interval 95% of the times.

The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals.

This step should take less than 1 second.

sf.predict(h=horizon)
                   ds  AutoRegressive
unique_id
1          2012-01-01    15904.990234
1          2012-02-01    13597.334961
1          2012-03-01    15488.281250
1          2012-10-01    14086.938477
1          2012-11-01    13273.106445
1          2012-12-01    12497.278320
forecast_df = sf.predict(h=horizon, level=[95]) 

forecast_df
                   ds  AutoRegressive  AutoRegressive-lo-95  AutoRegressive-hi-95
unique_id
1          2012-01-01    15904.990234           1759.574219          30050.406250
1          2012-02-01    13597.334961           -548.080872          27742.751953
1          2012-03-01    15488.281250           1342.864990          29633.697266
1          2012-10-01    14086.938477          -1445.719727          29619.597656
1          2012-11-01    13273.106445          -2283.256104          28829.470703
1          2012-12-01    12497.278320          -3072.115234          28066.671875

We can join the forecast result with the historical data using the pandas function pd.concat(), and then use this result for plotting.

pd.concat([df, forecast_df]).set_index('ds')
                  y unique_id  AutoRegressive  AutoRegressive-lo-95  AutoRegressive-hi-95
ds
1986-01-01   9034.0         1             NaN                   NaN                   NaN
1986-02-01   9596.0         1             NaN                   NaN                   NaN
1986-03-01  10558.0         1             NaN                   NaN                   NaN
2012-10-01      NaN       NaN    14086.938477          -1445.719727          29619.597656
2012-11-01      NaN       NaN    13273.106445          -2283.256104          28829.470703
2012-12-01      NaN       NaN    12497.278320          -3072.115234          28066.671875

Now let’s visualize the result of our forecast and the historical data of our time series, also let’s draw the confidence interval that we have obtained when making the prediction with 95% confidence.

def plot_forecasts(y_hist, y_true, y_pred, models):
    _, ax = plt.subplots(1, 1, figsize = (20, 7))
    y_true = y_true.merge(y_pred, how='left', on=['unique_id', 'ds'])
    df_plot = pd.concat([y_hist, y_true]).set_index('ds').tail(12*10)
    df_plot[['y'] + models].plot(ax=ax, linewidth=2)
    colors = ['orange', 'black', 'green']
    for model, color in zip(models, colors):
        ax.fill_between(df_plot.index, 
                        df_plot[f'{model}-lo-95'], 
                        df_plot[f'{model}-hi-95'],
                        alpha=.35,
                        color=color,
                        label=f'{model}-level-95')
    ax.set_title('', fontsize=22)
    ax.set_ylabel('', fontsize=20)
    ax.set_xlabel('Timestamp [t]', fontsize=20)
    ax.legend(prop={'size': 15})
    ax.grid(True)
    plt.show()
plot_forecasts(train, test, forecast_df, models=["AutoRegressive"])

Let’s plot the same graph using the plot function that comes in Statsforecast, as shown below.
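A minimal sketch, assuming the plot method of the StatsForecast class accepts the forecasts data frame and a level argument (as in recent versions of the library):

# Plot the historical data together with the forecasts and their 95% interval
sf.plot(df, forecast_df, level=[95])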

Cross-validation

In previous steps, we've used our historical data to predict the future. However, to assess its accuracy we would also like to know how the model would have performed in the past. To assess the accuracy and robustness of your model on your data, perform cross-validation.

With time series data, Cross Validation is done by defining a sliding window across the historical data and predicting the period following it. This form of cross-validation allows us to arrive at a better estimation of our model’s predictive abilities across a wider range of temporal instances while also keeping the data in the training set contiguous as is required by our models.

The following graph depicts such a Cross Validation Strategy:

Perform time series cross-validation

Cross-validation of time series models is considered a best practice but most implementations are very slow. The statsforecast library implements cross-validation as a distributed operation, making the process less time-consuming to perform. If you have big datasets you can also perform Cross Validation in a distributed cluster using Ray, Dask or Spark.

In this case, we want to evaluate the performance of the model over the last 5 windows (n_windows=5), forecasting 12 months ahead in each window (h=12) and moving the window every 6 months (step_size=6). Depending on your computer, this step should take around 1 min.

The cross_validation method from the StatsForecast class takes the following arguments.

  • df: training data frame

  • h (int): represents h steps into the future that are being forecasted. In this case, 12 months ahead.

  • step_size (int): step size between each window. In other words: how often do you want to run the forecasting processes.

  • n_windows(int): number of windows used for cross validation. In other words: what number of forecasting processes in the past do you want to evaluate.

crossvalidation_df = sf.cross_validation(df=train,
                                         h=horizon,
                                         step_size=6,
                                         n_windows=5)

The crossvalidation_df object is a new data frame that includes the following columns:

  • unique_id: index. If you don't like working with an index, just run crossvalidation_df.reset_index().
  • ds: datestamp or temporal index
  • cutoff: the last datestamp or temporal index for the n_windows.
  • y: true value
  • "model": columns with the model’s name and fitted value.
crossvalidation_df
                   ds      cutoff        y  AutoRegressive
unique_id
1          2009-01-01  2008-12-01  19262.0    24295.751953
1          2009-02-01  2008-12-01  20658.0    23993.804688
1          2009-03-01  2008-12-01  22660.0    21200.978516
1          2011-10-01  2010-12-01  12893.0    19350.546875
1          2011-11-01  2010-12-01  11843.0    16900.646484
1          2011-12-01  2010-12-01  11321.0    18160.392578

We'll now plot the forecast for each cutoff period. To make the plots clearer, we'll rename the column of actual values in each period.

crossvalidation_df.rename(columns = {'y' : 'actual'}, inplace = True) # rename actual values 

cutoff = crossvalidation_df['cutoff'].unique()

for k in range(len(cutoff)): 
    cv = crossvalidation_df[crossvalidation_df['cutoff'] == cutoff[k]]
    StatsForecast.plot(df, cv.loc[:, cv.columns != 'cutoff'])

Model Evaluation

We can now compute the accuracy of the forecast using an appropriate accuracy metric. Here we'll use the Root Mean Squared Error (RMSE). To do this, we first need to install datasetsforecast, a Python library developed by Nixtla that includes a function to compute the RMSE.

!pip install datasetsforecast
from datasetsforecast.losses import rmse

The function to compute the RMSE takes two arguments:

  1. The actual values.
  2. The forecasts, in this case, AutoRegressive.
rmse = rmse(crossvalidation_df['actual'], crossvalidation_df["AutoRegressive"])
print("RMSE using cross-validation: ", rmse)
RMSE using cross-validation:  3566.5479

As you have noticed, we have used the cross validation results to perform the evaluation of our model.

Now we are going to evaluate our model using the prediction results; we will use several metrics (MAE, MAPE, MASE, RMSE, SMAPE) to evaluate the accuracy.

from datasetsforecast.losses import (mae, mape, mase, rmse, smape)
def evaluate_performance(y_hist, y_true, y_pred, model):
    y_true = y_true.merge(y_pred, how='left', on=['unique_id', 'ds'])
    evaluation = {}
    evaluation[model] = {}
    for metric in [mase, mae, mape, rmse, smape]:
        metric_name = metric.__name__
        if metric_name == 'mase':
            evaluation[model][metric_name] = metric(y_true['y'].values, 
                                                y_true[model].values, 
                                                y_hist['y'].values, seasonality=12)
        else:
            evaluation[model][metric_name] = metric(y_true['y'].values, y_true[model].values)
    return pd.DataFrame(evaluation).T
evaluate_performance(train, test, Y_hat, model="AutoRegressive")
                       mae      mape      mase         rmse     smape
AutoRegressive  961.773193  7.271437  0.601651  1194.660588  6.970027

Acknowledgements

We would like to thank Naren Castellon for writing this tutorial.

References

  1. Changquan Huang and Alla Petukhina (2022). Applied Time Series Analysis and Forecasting with Python. Springer.
  2. Jose A. Fiorucci, Tiago R. Pellegrini, Francisco Louzada, Fotios Petropoulos, Anne B. Koehler (2016). “Models for optimising the theta method and their relationship to state space models”. International Journal of Forecasting.
  3. Nixtla Parameters.
  4. Pandas available frequencies.
  5. Rob J. Hyndman and George Athanasopoulos (2018). “Forecasting: Principles and Practice”, Time series cross-validation.
  6. Seasonal periods - Rob J. Hyndman.