The autoregressive (AR) time series model is a statistical technique used to analyze and forecast univariate time series. In essence, the autoregressive model is based on the idea that previous values of the time series can be used to predict future values.

In this model, the dependent variable (the time series) is regressed on itself at different moments in time, creating a dependency relationship between past and present values. The idea is that past values can help us understand and predict future values of the series.

The autoregressive model can be fitted with different orders, which indicate how many past values are used to predict the present value. For example, an autoregressive model of order 1, AR(1), uses only the immediately previous value to predict the current value, while an autoregressive model of order p, AR(p), uses the p previous values.

The autoregressive model is one of the basic models of time series analysis and is widely used in a variety of fields, from finance and economics to meteorology and the social sciences. The model’s ability to capture linear dependencies in time series data makes it especially useful for forecasting and long-term trend analysis.

In a multiple regression model, we forecast the variable of interest
using a linear combination of predictors. In an autoregression model,
we forecast the variable of interest using a linear combination of
past values of the variable. The term autoregression indicates that it
is a regression of the variable against itself.
Before giving a formal definition of the ARCH model, let’s define the
components of an ARCH model in a general way:
* Autoregressive: a concept we have already encountered, the construction of a univariate time series model using statistical methods, meaning that the current value of a variable is influenced by its own past values at different periods.
* Heteroscedasticity: the model can have different magnitudes or variability at different time points (the variance changes over time).
* Conditional: since volatility is not fixed, the variance is made conditionally dependent on the previous value or values of the variable.
The AR model is the most basic building block of univariate time series.
As you have seen before, univariate time series are a family of models
that use only information about the target variable’s past to forecast
its future, and do not rely on other explanatory variables.

Definition 1. (1) The following equation is called the autoregressive model of order p, denoted AR(p):

$$X_t = \varphi_0 + \varphi_1 X_{t-1} + \varphi_2 X_{t-2} + \cdots + \varphi_p X_{t-p} + \varepsilon_t \tag{1}$$

where $\{\varepsilon_t\} \sim \mathrm{WN}(0, \sigma_\varepsilon^2)$, $E(X_s \varepsilon_t) = 0$ for $s < t$, and $\varphi_0, \varphi_1, \dots, \varphi_p$ are real-valued parameters (coefficients) with $\varphi_p \neq 0$.

If a time series $\{X_t\}$ is stationary and satisfies such an equation as (1), then we call it an AR(p) process.
Note the following remarks about this definition:
* For simplicity, we often assume that the intercept (constant term) $\varphi_0 = 0$; otherwise, we can consider $\{X_t - \mu\}$ where $\mu = \varphi_0/(1 - \varphi_1 - \cdots - \varphi_p)$.
* We distinguish the concept of AR models from the concept of AR processes. AR models may or may not be stationary, while AR processes must be stationary.
* $E(X_s \varepsilon_t) = 0$ for $s < t$ means that $X_s$ in the past has nothing to do with $\varepsilon_t$ at the current time $t$.
* As in the definition of MA models, $\varepsilon_t$ in Eq. (1) is sometimes called the innovation or shock term.
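To make the definition concrete, here is a minimal sketch that simulates a zero-intercept AR(p) process with NumPy. The coefficients, seed, and sample size are illustrative choices, not values from this article.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ar(phi, n, sigma=1.0, burn_in=200):
    """Simulate X_t = phi_1 X_{t-1} + ... + phi_p X_{t-p} + eps_t (phi_0 = 0).

    A burn-in period is discarded so the sample is close to stationary.
    """
    p = len(phi)
    eps = rng.normal(0.0, sigma, size=n + burn_in)
    x = np.zeros(n + burn_in)
    for t in range(p, n + burn_in):
        # dot product of (phi_1, ..., phi_p) with (x_{t-1}, ..., x_{t-p}), plus the shock
        x[t] = np.dot(phi, x[t - p:t][::-1]) + eps[t]
    return x[burn_in:]

# Example: a causal AR(2) process with phi_1 = 0.5 and phi_2 = 0.3
x = simulate_ar([0.5, 0.3], n=500)
```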
In addition, using the backshift operator $B$, the AR(p) model can be rewritten as

$$\varphi(B) X_t = \varepsilon_t$$

where $\varphi(z) = 1 - \varphi_1 z - \cdots - \varphi_p z^p$ is called the (corresponding) AR polynomial. Besides, in the Python package statsmodels, $\varphi(B)$ is called the AR lag polynomial.
Let $\{X_t\}$ be a stationary time series with $E(X_t) = 0$. Here the assumption $E(X_t) = 0$ is for conciseness only: if $E(X_t) = \mu \neq 0$, it is okay to replace $\{X_t\}$ by $\{X_t - \mu\}$. Now consider the linear regression (prediction) of $X_t$ on $\{X_{t-k+1}, \dots, X_{t-1}\}$ for any integer $k \geq 2$. We use $\hat{X}_t$ to denote this regression (prediction):

$$\hat{X}_t = \alpha_1 X_{t-1} + \cdots + \alpha_{k-1} X_{t-k+1}$$

where $\{\alpha_1, \dots, \alpha_{k-1}\}$ satisfy

$$\{\alpha_1, \dots, \alpha_{k-1}\} = \underset{\beta_1, \dots, \beta_{k-1}}{\arg\min}\; E\big[X_t - (\beta_1 X_{t-1} + \cdots + \beta_{k-1} X_{t-k+1})\big]^2$$

That is, $\{\alpha_1, \dots, \alpha_{k-1}\}$ are chosen by minimizing the mean squared error of prediction. Similarly, let $\hat{X}_{t-k}$ denote the regression (prediction) of $X_{t-k}$ on $\{X_{t-k+1}, \dots, X_{t-1}\}$:

$$\hat{X}_{t-k} = \eta_1 X_{t-1} + \cdots + \eta_{k-1} X_{t-k+1}$$

Note that if $\{X_t\}$ is stationary, then $\{\alpha_1, \dots, \alpha_{k-1}\} = \{\eta_1, \dots, \eta_{k-1}\}$. Now let $\hat{Z}_{t-k} = X_{t-k} - \hat{X}_{t-k}$ and $\hat{Z}_t = X_t - \hat{X}_t$. Then $\hat{Z}_{t-k}$ is the residual of removing the effect of the intervening variables $\{X_{t-k+1}, \dots, X_{t-1}\}$ from $X_{t-k}$, and $\hat{Z}_t$ is the residual of removing the effect of $\{X_{t-k+1}, \dots, X_{t-1}\}$ from $X_t$.

Definition 2. The partial autocorrelation function (PACF) at lag $k$ of a stationary time series $\{X_t\}$ with $E(X_t) = 0$ is

$$\phi_{11} = \mathrm{Corr}(X_{t-1}, X_t) = \frac{\mathrm{Cov}(X_{t-1}, X_t)}{[\mathrm{Var}(X_{t-1})\mathrm{Var}(X_t)]^{1/2}} = \rho_1$$

and

$$\phi_{kk} = \mathrm{Corr}(\hat{Z}_{t-k}, \hat{Z}_t) = \frac{\mathrm{Cov}(\hat{Z}_{t-k}, \hat{Z}_t)}{[\mathrm{Var}(\hat{Z}_{t-k})\mathrm{Var}(\hat{Z}_t)]^{1/2}}, \quad k \geq 2$$

According to the properties of the correlation coefficient (see, e.g., p. 172, Casella and Berger 2002), $|\phi_{kk}| \leq 1$. On the other hand, the following theorem paves the way to estimating the PACF of a stationary time series; its proof can be found in Fan and Yao (2003).

Theorem 1. Let $\{X_t\}$ be a stationary time series with $E(X_t) = 0$, and let $\{a_{1k}, \dots, a_{kk}\}$ satisfy

$$\{a_{1k}, \dots, a_{kk}\} = \underset{a_1, \dots, a_k}{\arg\min}\; E(X_t - a_1 X_{t-1} - \cdots - a_k X_{t-k})^2$$

Then $\phi_{kk} = a_{kk}$ for $k \geq 1$.
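As a quick illustration of Theorem 1, the sample PACF can be computed with statsmodels; using method="ols" regresses $X_t$ on its lags, matching the least-squares characterization above. This sketch reuses the simulated series x from the earlier example.

```python
from statsmodels.tsa.stattools import pacf

# Sample PACF up to lag 10; for the simulated AR(2), values beyond
# lag 2 should be close to zero.
phi_kk = pacf(x, nlags=10, method="ols")
print(phi_kk.round(3))
```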
From the AR(p) model, namely Eq. (1), we can see that it has the same form as the multiple linear regression model. However, it explains the current value with the variable's own past. Given the past $\{X_{t-p}, \dots, X_{t-1}\} = \{x_{t-p}, \dots, x_{t-1}\}$, we have

$$E(X_t \mid X_{t-p:t-1}) = \varphi_0 + \varphi_1 x_{t-1} + \varphi_2 x_{t-2} + \cdots + \varphi_p x_{t-p}$$

This suggests that, given the past, the right-hand side of this equation is a good estimate of $X_t$. Besides,

$$\mathrm{Var}(X_t \mid X_{t-p:t-1}) = \mathrm{Var}(\varepsilon_t) = \sigma_\varepsilon^2$$

Now suppose that the AR(p) model, namely Eq. (1), is stationary; then we have:

1. The model mean $E(X_t) = \mu = \varphi_0/(1 - \varphi_1 - \cdots - \varphi_p)$. Thus, the model mean $\mu = 0$ if and only if $\varphi_0 = 0$.
2. If the mean is zero or $\varphi_0 = 0$ ((3) and (4) below make the same assumption), then noting that $E(X_t \varepsilon_t) = \sigma_\varepsilon^2$, we multiply Eq. (1) by $X_t$, take expectations, and get $\gamma_0 = \varphi_1 \gamma_1 + \cdots + \varphi_p \gamma_p + \sigma_\varepsilon^2$, so the model variance is $\mathrm{Var}(X_t) = \gamma_0 = \sigma_\varepsilon^2/(1 - \varphi_1 \rho_1 - \cdots - \varphi_p \rho_p)$.
3. For all $k > p$, the partial autocorrelation $\phi_{kk} = 0$; that is, the PACF of AR(p) models cuts off after lag $p$, which is very helpful in identifying an AR model. In fact, at this point, the predictor or regression of $X_t$ on $\{X_{t-k+1}, \dots, X_{t-1}\}$ is

$$\hat{X}_t = \varphi_1 X_{t-1} + \cdots + \varphi_{k-1} X_{t-k+1}$$

Thus, $X_t - \hat{X}_t = \varepsilon_t$. Moreover, $X_{t-k} - \hat{X}_{t-k}$ is a function of $\{X_{t-k}, \dots, X_{t-1}\}$, and $\varepsilon_t$ is uncorrelated with every member of $\{X_{t-k}, \dots, X_{t-1}\}$. Therefore

$$\mathrm{Cov}(X_{t-k} - \hat{X}_{t-k}, X_t - \hat{X}_t) = \mathrm{Cov}(X_{t-k} - \hat{X}_{t-k}, \varepsilon_t) = 0$$

By Definition 2, $\phi_{kk} = 0$.
4. We multiply Eq. (1) by $X_{t-k}$, take expectations, divide by $\gamma_0$, and obtain the recursive relationship between the autocorrelations: for $k \geq 1$,

$$\rho_k = \varphi_1 \rho_{k-1} + \varphi_2 \rho_{k-2} + \cdots + \varphi_p \rho_{k-p} \tag{2}$$

For Eq. (2), let $k = 1, 2, \dots, p$. Then we arrive at a set of difference equations, known as the Yule-Walker equations. If the ACF values $\{\rho_1, \dots, \rho_p\}$ are given, then we can solve the Yule-Walker equations to obtain estimates for $\{\varphi_1, \dots, \varphi_p\}$; the solutions are called the Yule-Walker estimates (a numerical sketch follows after this list).
5. Since the model is now a stationary AR(p), it naturally satisfies $X_t = \varphi_1 X_{t-1} + \varphi_2 X_{t-2} + \cdots + \varphi_p X_{t-p} + \varepsilon_t$. Hence $\phi_{pp} = \varphi_p$. If the AR(p) model is further Gaussian and a sample of size $T$ is given, then (a) $\hat{\phi}_{pp} \to \varphi_p$ as $T \to \infty$; and (b) according to Quenouille (1949), for $k > p$, $\sqrt{T}\,\hat{\phi}_{kk}$ asymptotically follows the standard normal (Gaussian) distribution $N(0, 1)$; in other words, $\hat{\phi}_{kk}$ is asymptotically distributed as $N(0, 1/T)$.
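As a sketch of the Yule-Walker estimates, we can build the equations from the sample ACF and solve the resulting linear system, again using the simulated series x from earlier. (statsmodels also ships a ready-made statsmodels.regression.linear_model.yule_walker.)

```python
import numpy as np
from statsmodels.tsa.stattools import acf

def yule_walker_estimates(x, p):
    """Solve R a = r, where R[i, j] = rho_{|i-j|} and r = (rho_1, ..., rho_p)."""
    rho = acf(x, nlags=p, fft=True)  # rho_0, rho_1, ..., rho_p
    R = np.array([[rho[abs(i - j)] for j in range(p)] for i in range(p)])
    r = rho[1:p + 1]
    return np.linalg.solve(R, r)

# For the simulated AR(2), the estimates should be close to (0.5, 0.3)
print(yule_walker_estimates(x, p=2))
```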
Consider the AR(1) model:

$$X_t = \varphi X_{t-1} + \varepsilon_t, \quad \varepsilon_t \sim \mathrm{WN}(0, \sigma_\varepsilon^2) \tag{3}$$

For $|\varphi| < 1$, let $X_{1t} = \sum_{j=0}^{\infty} \varphi^j \varepsilon_{t-j}$, and for $|\varphi| > 1$, let $X_{2t} = -\sum_{j=1}^{\infty} \varphi^{-j} \varepsilon_{t+j}$. It is easy to show that both $\{X_{1t}\}$ and $\{X_{2t}\}$ are stationary and satisfy Eq. (3). That is, both are stationary solutions of Eq. (3). This raises a question: which of the two is preferable? Obviously, $\{X_{2t}\}$ depends on future values of the unobservable $\{\varepsilon_t\}$, and so it is unnatural. Hence we take $\{X_{1t}\}$ and abandon $\{X_{2t}\}$. In other words, we require that the coefficient $\varphi$ in Eq. (3) be less than 1 in absolute value. At this point, the AR(1) model is said to be causal, and its causal expression is $X_t = \sum_{j=0}^{\infty} \varphi^j \varepsilon_{t-j}$. In general, the definition of causality is given below.

Definition 3. (1) A time series $\{X_t\}$ is causal if there exist coefficients $\psi_j$ such that

$$X_t = \sum_{j=0}^{\infty} \psi_j \varepsilon_{t-j}, \quad \sum_{j=0}^{\infty} |\psi_j| < \infty$$

where $\psi_0 = 1$ and $\{\varepsilon_t\} \sim \mathrm{WN}(0, \sigma_\varepsilon^2)$. At this point, we say that the time series $\{X_t\}$ has an MA($\infty$) representation.
We say that a model is causal if the time series generated by it is
causal.
Causality suggests that the time series $\{X_t\}$ is caused by the white noise (or innovations) from the past up to time $t$. Besides, the time series $\{X_{2t}\}$ above is an example of a series that is stationary but not causal.
In order to determine whether an AR model is causal, similar to invertibility for the MA model, we have the following theorem.

Theorem 2 (Causality Theorem). An AR model defined by Eq. (1) is causal if and only if the roots of its AR polynomial $\varphi(z) = 1 - \varphi_1 z - \cdots - \varphi_p z^p$ exceed 1 in modulus, that is, lie outside the unit circle in the complex plane.

Note the following remarks:

* In light of the existence and uniqueness result on page 75 of Brockwell and Davis (2016), an AR model defined by Eq. (1) is stationary if and only if its AR polynomial $\varphi(z) = 1 - \varphi_1 z - \cdots - \varphi_p z^p \neq 0$ for all $|z| = 1$, that is, none of the roots of the AR polynomial lie on the unit circle. Hence for the AR model defined by Eq. (1), the stationarity condition is weaker than the causality condition.
* A causal time series is surely a stationary one. So an AR model that satisfies the causality condition is naturally stationary. But a stationary AR model is not necessarily causal.
* If the time series $\{X_t\}$ generated by Eq. (1) is not from the remote past, namely $t \in \mathbb{T} = \{\dots, -n, \dots, -1, 0, 1, \dots, n, \dots\}$, but instead starts from an initial value $X_0$, then it may be nonstationary, not to mention causal.
* According to the relationship between the roots and the coefficients of the degree-2 polynomial $\varphi(z) = 1 - \varphi_1 z - \varphi_2 z^2$, it may be proved that both roots of the polynomial exceed 1 in modulus if and only if

$$\varphi_1 + \varphi_2 < 1, \quad \varphi_2 - \varphi_1 < 1, \quad |\varphi_2| < 1$$

Thus, we can conveniently use these three inequalities to decide whether an AR(2) model is causal or not.
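For models of any order, Theorem 2 can also be checked numerically by finding the roots of the AR polynomial; a minimal sketch with NumPy:

```python
import numpy as np

def is_causal(phi):
    """True if all roots of phi(z) = 1 - phi_1 z - ... - phi_p z^p
    lie outside the unit circle."""
    coeffs = np.r_[1.0, -np.asarray(phi)]  # phi(z) in ascending powers of z
    roots = np.roots(coeffs[::-1])         # np.roots expects descending powers
    return bool(np.all(np.abs(roots) > 1.0))

print(is_causal([0.5, 0.3]))  # True: satisfies the three AR(2) inequalities
print(is_causal([1.2]))       # False: AR(1) with |phi| > 1
```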
It may be shown that for an AR(p) model defined by Eq. (1), the coefficients $\{\psi_j\}$ in Definition 3 satisfy $\psi_0 = 1$ and

$$\psi_j = \sum_{i=1}^{\min(j,\, p)} \varphi_i \psi_{j-i}, \quad j \geq 1$$
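This recursion is easy to evaluate numerically; here is a small sketch that computes the first MA($\infty$) weights of a causal AR(p):

```python
import numpy as np

def psi_weights(phi, m):
    """First m + 1 weights of the MA(infinity) representation:
    psi_0 = 1 and psi_j = sum_{i=1}^{min(j, p)} phi_i * psi_{j-i}."""
    p = len(phi)
    psi = np.zeros(m + 1)
    psi[0] = 1.0
    for j in range(1, m + 1):
        psi[j] = sum(phi[i - 1] * psi[j - i] for i in range(1, min(j, p) + 1))
    return psi

print(psi_weights([0.5, 0.3], 5))  # weights for the AR(2) example above
```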
The autoregressive model describes a relationship between the present value of a variable and its past. Therefore, it is suitable for variables in which past and present values are correlated.

As an intuitive example, consider the waiting line at the doctor's office. Imagine that the doctor has a schedule in which each patient gets 20 minutes with him. If each patient takes exactly 20 minutes, this works well. But what if a patient takes a little longer? Autocorrelation could be present if the duration of one appointment has an impact on the duration of the next. So if the doctor needs to speed up an appointment because the previous one took too long, we are looking at a correlation between the past and the present: past values influence future values.
Like “regular” correlation, autocorrelation can be positive or negative. Positive autocorrelation means that a high value now is likely to give a high value in the next period. This can be observed, for example, in stock trading: as soon as a lot of people want to buy a stock, its price goes up. This positive trend makes people want to buy the stock even more, as it has positive returns. The more people buy the stock, the higher it goes and the more people will want to buy it.

Positive autocorrelation also works in downtrends. If today's stock value is low, tomorrow's value is likely to be even lower as people start selling. When many people sell, the value falls, and even more people will want to sell. This is also a case of positive autocorrelation, since the past and the present move in the same direction: if the past is low, the present is low; and if the past is high, the present is high.

There is negative autocorrelation if two consecutive movements tend to go in opposite directions. This is the case in the example of the doctor's visits. If one appointment takes longer, the next one will be shorter; and if one visit takes less time, the doctor may take a little longer for the next one.
The problem of having a trend in our data is a general one in univariate time series modeling. Stationarity of a time series means that the series does not have a (long-term) trend: it is stable around the same average. Otherwise, a time series is said to be non-stationary.

In theory, AR models can include a trend coefficient, but since stationarity is an important concept in general time series theory, it's best to learn to deal with it right away. Many models can only work on stationary time series.

A time series that is growing or falling strongly over time is obvious to spot. But sometimes it's hard to tell whether a time series is stationary. This is where the Augmented Dickey-Fuller (ADF) test comes in handy.
The Augmented Dickey-Fuller (ADF) test is a statistical test that determines whether a unit root is present in time series data. Unit roots can cause unpredictable results in time series analysis. A null hypothesis is formed in the unit root test to determine how strongly the time series data is affected by a trend. By failing to reject the null hypothesis, we accept the evidence that the time series is non-stationary. By rejecting the null hypothesis, i.e., accepting the alternative hypothesis, we accept the evidence that the time series is generated by a stationary process (such a process is also said to be trend-stationary). The values of the ADF test statistic are negative, and lower (more negative) values indicate a stronger rejection of the null hypothesis.

We can summarize the test by defining the null and alternative hypotheses:

* Null hypothesis: the time series is non-stationary; it shows a time-dependent trend.
* Alternative hypothesis: the time series is stationary; it does not depend on time.

And the decision rule:

* ADF (t) statistic < critical value: reject the null hypothesis; the time series is stationary.
* ADF (t) statistic > critical value: fail to reject the null hypothesis; the time series is non-stationary.

Let's check whether the series we are analyzing is stationary. We create a function that runs the Dickey-Fuller test:
```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def Augmented_Dickey_Fuller_Test_func(series, column_name):
    print(f'Dickey-Fuller test results for columns: {column_name}')
    dftest = adfuller(series, autolag='AIC')
    dfoutput = pd.Series(dftest[0:4],
                         index=['Test Statistic', 'p-value',
                                'No Lags Used', 'Number of observations used'])
    # Append the critical values reported by the test
    for key, value in dftest[4].items():
        dfoutput[f'Critical Value ({key})'] = value
    print(dfoutput)
    if dftest[1] <= 0.05:
        print("Conclusion:====>")
        print("Reject the null hypothesis")
        print("The data is stationary")
    else:
        print("Conclusion:====>")
        print("The null hypothesis cannot be rejected")
        print("The data is not stationary")
```
```
Dickey-Fuller test results for columns: Sales
Test Statistic          -1.589903
p-value                  0.488664
No Lags Used            14.000000
...
Critical Value (1%)     -3.451691
Critical Value (5%)     -2.870939
Critical Value (10%)    -2.571778
Length: 7, dtype: float64
Conclusion:====>
The null hypothesis cannot be rejected
The data is not stationary
```
In the previous result we can see that the Augmented Dickey-Fuller test gives a p-value of 0.488664, which tells us that the null hypothesis cannot be rejected: the data of our series is not stationary.

We need to difference our time series in order to make the data stationary.
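A minimal sketch of first differencing with pandas, assuming the series lives in a DataFrame df with a value column named y (hypothetical names; adapt them to your data):

```python
# First difference: X_t - X_{t-1}; drop the initial NaN it creates.
df_diff = df["y"].diff().dropna()

# Re-run the ADF test on the differenced series
Augmented_Dickey_Fuller_Test_func(df_diff, "Sales")
```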
```
Dickey-Fuller test results for columns: Sales
Test Statistic          -4.310935
p-value                  0.000425
No Lags Used            17.000000
...
Critical Value (1%)     -3.451974
Critical Value (5%)     -2.871063
Critical Value (10%)    -2.571844
Length: 7, dtype: float64
Conclusion:====>
Reject the null hypothesis
The data is stationary
```
After applying one difference, our time series is now stationary.
As you can see from the blue shaded area of the PACF plot, lags 1, 2, 3, 4, 6, 7, 9, 10, and so on fall outside the shaded region. This means it would be interesting to also include these lags in the AR model.
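A sketch of how such a PACF plot can be produced with statsmodels, assuming the differenced series df_diff from earlier; the shaded band is the approximate $\pm 1.96/\sqrt{T}$ region implied by Quenouille's result:

```python
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_pacf

fig, ax = plt.subplots(figsize=(10, 4))
plot_pacf(df_diff, lags=30, method="ywm", ax=ax)
plt.show()
```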
Now, the big question in time series analysis is always how many lags to include. This is called the order of the model: the notation is AR(1) for order 1 and AR(p) for order p.

The order is up to you. Theoretically speaking, you can base your order on the PACF chart. Theory tells you to take the number of lags before the partial autocorrelation reaches 0; all later lags should be 0.

In theory, you often see clean charts where the first peak is very high and the rest equal zero. In those cases, the choice is easy: you are working with a very “pure” example of AR(1). Another common case is when the partial autocorrelation starts high and slowly decreases to zero. In this case, you should use all lags at which the PACF is not yet zero.

However, in practice, it is not always that simple. Remember the famous saying “all models are wrong, but some are useful”. It is very rare to find cases that fit an AR model perfectly. In general, the autoregressive process can help explain part of the variation of a variable, but not all of it.

In practice, you will try to select the number of lags that gives your model the best predictive performance. The best predictive performance is often not determined by looking at autocorrelation plots: those plots give you a theoretical estimate. Predictive performance is best determined by model evaluation and benchmarking, using the techniques you have seen in Module 2. Later in this module, we will see how to use model evaluation to choose the best-performing order for the AR model. But before we get into that, it's time to dig into the exact definition of the AR model.
Let's divide our data into two sets:

1. Data to train our AutoRegressive model
2. Data to test our model

For the test data we will use the last 12 months to test and evaluate the performance of our model.
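A minimal sketch of the split for a single monthly series, assuming a long-format DataFrame df with the columns unique_id, ds, and y used by StatsForecast:

```python
# Hold out the last 12 months for testing
train = df.iloc[:-12]
test = df.iloc[-12:]
print(train.shape, test.shape)
```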
Import and instantiate the models. Setting the arguments is sometimes tricky. This article on seasonal periods by the master, Rob Hyndman, can be useful for setting season_length.

Method 1: we pass the lags parameter as an integer, that is, we give the number of lags we want the model to evaluate.
```python
from statsforecast.models import AutoRegressive

season_length = 12   # Monthly data
horizon = len(test)  # number of predictions

models = [AutoRegressive(lags=14, include_mean=True)]
```
Method 2: We use the lags parameter in a list format, that is, we
put the lags that we want to evaluate in the model in the form of a list
as shown below.
```python
season_length = 12   # Monthly data
horizon = len(test)  # number of predictions

models = [AutoRegressive(lags=[3, 4, 6, 7, 9, 10, 11, 12, 13, 14], include_mean=True)]
```
We fit the models by instantiating a new StatsForecast object with the following parameters:

* models: a list of models. Select the models you want from models and import them.
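A sketch of the fit step, assuming monthly data and the models list defined above:

```python
from statsforecast import StatsForecast

# freq="M" is the monthly frequency alias; newer pandas versions may prefer "ME"
sf = StatsForecast(models=models, freq="M", n_jobs=-1)
sf.fit(df=train)
```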
Let us now visualize the residuals of our models.

As we can see, the result obtained above is stored in a dictionary. To extract each element from the dictionary we use the .get() function and then save the result in a pd.DataFrame(), as sketched below.
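A sketch of that extraction, assuming the fitted model exposes its output dictionary as model_ with a "residuals" key, as in the statsforecast tutorials:

```python
import pandas as pd

# sf.fitted_[0, 0] is the first model fitted on the first series
residual = pd.DataFrame(
    sf.fitted_[0, 0].model_.get("residuals"),
    columns=["residual Model AR"],
)
residual.plot(title="Model residuals")
```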
If you want to gain speed in production settings where you have multiple series or models, we recommend using the StatsForecast.forecast method instead of .fit and .predict. The main difference is that .forecast does not store the fitted values and is highly scalable in distributed environments.

The forecast method takes two arguments: forecasts the next h (horizon) and level.
h (int): represents the forecast h steps into the future. In this
case, 12 months ahead.
level (list of floats): this optional parameter is used for
probabilistic forecasting. Set the level (or confidence percentile)
of your prediction interval. For example, level=[90] means that
the model expects the real value to be inside that interval 90% of
the times.
The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals. Depending on your computer, this step should take around 1 minute. (If you want to speed things up to a couple of seconds, remove slower models from the list.)
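A sketch of the call with a 90% prediction interval, using the train set and horizon defined earlier:

```python
Y_hat_df = sf.forecast(df=train, h=horizon, level=[90])
Y_hat_df.head()
```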
To generate forecasts, use the predict method. The predict method takes two arguments: forecasts the next h (for horizon) and level.
h (int): represents the forecast h steps into the future. In this
case, 12 months ahead.
level (list of floats): this optional parameter is used for
probabilistic forecasting. Set the level (or confidence percentile)
of your prediction interval. For example, level=[95] means that
the model expects the real value to be inside that interval 95% of
the times.
The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals. This step should take less than 1 second. A sketch of the call follows below.
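```python
# Reuse the fitted object from the .fit step above; 95% prediction interval
forecast_df = sf.predict(h=horizon, level=[95])
forecast_df.head()
```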
In previous steps, we've taken our historical data to predict the future. However, to assess its accuracy we would also like to know how the model would have performed in the past. To assess the accuracy and robustness of your model on your data, perform cross-validation.

With time series data, cross-validation is done by defining a sliding window across the historical data and predicting the period following it. This form of cross-validation allows us to arrive at a better estimate of our model's predictive abilities across a wider range of temporal instances while also keeping the data in the training set contiguous, as is required by our models.

The following graph depicts such a cross-validation strategy:
Cross-validation of time series models is considered a best practice but most implementations are very slow. The statsforecast library implements cross-validation as a distributed operation, making the process less time-consuming to perform. If you have big datasets you can also perform cross-validation in a distributed cluster using Ray, Dask or Spark.

In this case, we want to evaluate the performance of each model for the last 5 windows (n_windows=5), forecasting every 12 months (step_size=12). Depending on your computer, this step should take around 1 minute.

The cross_validation method from the StatsForecast class takes the following arguments (see the sketch after this list):
df: training data frame
h (int): represents h steps into the future that are being
forecasted. In this case, 12 months ahead.
step_size (int): step size between each window. In other words:
how often do you want to run the forecasting processes.
n_windows(int): number of windows used for cross validation. In
other words: what number of forecasting processes in the past do you
want to evaluate.
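```python
# Rolling-origin cross-validation: 5 windows of 12 months each
crossvalidation_df = sf.cross_validation(df=train, h=12, step_size=12, n_windows=5)
crossvalidation_df.head()
```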
Now we are going to evaluate our model using the results of the predictions. We will use several types of metrics (MAE, MAPE, MASE, RMSE, SMAPE) to evaluate the accuracy.
```python
from functools import partial

import utilsforecast.losses as ufl
from utilsforecast.evaluation import evaluate
```
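A sketch of the evaluation on the cross-validation results; MASE needs the seasonal period (12 for monthly data) and the training data to compute its in-sample scale:

```python
metrics = [ufl.mae, ufl.mape, partial(ufl.mase, seasonality=12), ufl.rmse, ufl.smape]

evaluation_df = evaluate(
    crossvalidation_df.drop(columns="cutoff"),
    metrics=metrics,
    train_df=train,  # required by MASE
)
evaluation_df
```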