Introduction

Exponential smoothing has been one of the most popular forecasting methods used to support various decisions in organizations, in activities such as inventory management, scheduling, revenue management, and other areas. Although its relative simplicity and transparency have made it very attractive for research and practice, identifying the underlying trend remains challenging with significant impact on the resulting accuracy. This has resulted in the development of various modifications of trend models, introducing a model selection problem. With the aim of addressing this problem, we propose the complex exponential smoothing (CES), based on the theory of functions of complex variables. The basic CES approach involves only two parameters and does not require a model selection procedure. Despite these simplifications, CES proves to be competitive with, or even superior to existing methods. We show that CES has several advantages over conventional exponential smoothing models: it can model and forecast both stationary and non-stationary processes, and CES can capture both level and trend cases, as defined in the conventional exponential smoothing classification. CES is evaluated on several forecasting competition datasets, demonstrating better performance than established benchmarks. We conclude that CES has desirable features for time series modeling and opens new promising avenues for research.

Complex Exponential Smoothing

Method and model

Using the complex-valued representation of time series, we propose CES by analogy with the conventional exponential smoothing methods. Consider the simple exponential smoothing method:

\begin{equation} {\hat{y}}_t=\alpha {y}_{t-1}+\left(1-\alpha \right){\hat{y}}_{t-1} \tag{1} \end{equation}

where $\alpha$ is the smoothing parameter and ${\hat{y}}_t$ is the estimated value of the series. The same method can be represented as a weighted average of previous actual observations if we recursively substitute ${\hat{y}}_{t-1}$ using formula (1) with the index $t-1$ instead of $t$ (Brown, 1956):

\begin{equation} {\hat{y}}_t=\alpha \sum \limits_{j=1}^{t-1}{\left(1-\alpha \right)}^{j-1}{y}_{t-j} \tag{2} \end{equation}

The idea of this representation is to demonstrate how the weights $\alpha {\left(1-\alpha \right)}^{j-1}$ are distributed over time in our sample. If the smoothing parameter $\alpha \in \left(0,1\right)$, then the weights decline exponentially as $j$ increases. If it lies in the so-called “admissible bounds” (Brenner et al., 1968), that is $\alpha \in \left(0,2\right)$, then the weights decline in an oscillating manner. Both the traditional and the admissible bounds have been used efficiently in practice and in the academic literature (for applications of the latter see, for example, Gardner & Diaz-Saiz, 2008; Snyder et al., 2017). However, in real life the distribution of weights can be more complex, with a harmonic rather than exponential decline, meaning that some of the past observations might carry more importance than the recent ones. In order to implement such a distribution of weights, we build upon (2) and introduce complex dynamic interactions by substituting the real variables in (2) with complex ones. First, we substitute ${y}_{t-j}$ by the complex variable ${y}_{t-j}+{ie}_{t-j}$, where ${e}_t$ is the error term of the model and $i$ is the imaginary unit (which satisfies the equation ${i}^2=-1$). The idea is that both the actual value and the error at each past observation should influence the final forecast. Second, we substitute $\alpha$ with the complex variable ${\alpha}_0+i{\alpha}_1$ and 1 with $1+i$ to introduce the harmonically declining weights. Depending on the value of the complex smoothing parameter, the weight distribution will exhibit a variety of trajectories over time, including exponential, oscillating, and harmonic. Finally, the result of multiplying two complex numbers is another complex number, so we substitute ${\hat{y}}_{t-j}$ with ${\hat{y}}_{t-j}+i{\hat{e}}_{t-j}$, where ${\hat{e}}_{t-j}$ is a proxy for the error term. The resulting CES can be written as:

\begin{equation} {\hat{y}}_t+i{\hat{e}}_t=\left({\alpha}_0+i{\alpha}_1\right)\sum \limits_{j=1}^{t-1}{\left(1+i-\left({\alpha}_0+i{\alpha}_1\right)\right)}^{j-1}\left({y}_{t-j}+{ie}_{t-j}\right) \tag{3} \end{equation}
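
To make the contrast concrete, the following sketch (using NumPy, with arbitrary illustrative values of the smoothing parameters, not estimates from any dataset) computes the weights implied by (2) and the complex weights implied by (3):

import numpy as np

# Arbitrary illustrative values of the smoothing parameters
alpha = 0.3            # SES smoothing parameter in (2)
a0, a1 = 1.2, 0.9      # real and imaginary parts of the complex smoothing parameter in (3)
j = np.arange(1, 21)   # lags 1..20

ses_weights = alpha * (1 - alpha) ** (j - 1)                          # exponential decline
ces_weights = (a0 + 1j * a1) * (1 + 1j - (a0 + 1j * a1)) ** (j - 1)   # complex weights

print(np.round(ses_weights, 4))
print(np.round(ces_weights.real, 4))  # real part: weights applied to the actual values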

Having arrived at the model with harmonically distributed weights, we can now move to a shorter form by substituting

\begin{equation*} {\hat{y}}_{t-1}+i{\hat{e}}_{t-1}=\left({\alpha}_0+i{\alpha}_1\right)\sum \limits_{j=2}^{t-1}{\left(1+i-\left({\alpha}_0+i{\alpha}_1\right)\right)}^{j-2}\left({y}_{t-j}+{ie}_{t-j}\right) \end{equation*}

in (3) to get:

\begin{equation} {\hat{y}}_t+i{\hat{e}}_t=\left({\alpha}_0+i{\alpha}_1\right)\left({y}_{t-1}+{ie}_{t-1}\right)+\left(1-{\alpha}_0+i-i{\alpha}_1\right)\left({\hat{y}}_{t-1}+i{\hat{e}}_{t-1}\right). \tag{4} \end{equation}

Note that ${\hat{e}}_t$ is not of direct interest for time series analysis and forecasting purposes; it acts as a vessel carrying the information about the previous errors of the method. Having complex variables instead of real ones in (4) allows taking exponentially weighted values of both the actuals and the forecast errors. By changing the value of ${\alpha}_0+i{\alpha}_1$, we can regulate what proportions of the actuals and the forecast errors should be carried over into the future in order to produce forecasts.

Representing the complex-valued function as a system of two real-valued functions leads to:

\begin{equation} \begin{aligned} {\hat{y}}_t &= \left({\alpha}_0{y}_{t-1}+\left(1-{\alpha}_0\right){\hat{y}}_{t-1}\right)-\left({\alpha}_1{e}_{t-1}+\left(1-{\alpha}_1\right){\hat{e}}_{t-1}\right)\\ {\hat{e}}_t &= \left({\alpha}_1{y}_{t-1}+\left(1-{\alpha}_1\right){\hat{y}}_{t-1}\right)+\left({\alpha}_0{e}_{t-1}+\left(1-{\alpha}_0\right){\hat{e}}_{t-1}\right). \end{aligned} \tag{5} \end{equation}

CES introduces an interaction between the real and imaginary parts: the two equations in (5) are connected through each other's previous values, producing interactions over time that are governed by the value of the complex smoothing parameter.
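
As an illustration of how the recursion in (5) operates, here is a minimal sketch (not the statsforecast implementation) that filters a series for given values of $\alpha_0$ and $\alpha_1$, with arbitrary starting values:

import numpy as np

def ces_filter(y, a0, a1):
    """Minimal sketch of recursion (5); starting values are arbitrary."""
    n = len(y)
    y_hat = np.zeros(n)   # forecasts
    e_hat = np.zeros(n)   # information ("imaginary") component
    y_hat_prev, e_hat_prev = y[0], 0.0
    y_prev, e_prev = y[0], 0.0            # no observation available before t=0
    for t in range(n):
        y_hat[t] = (a0 * y_prev + (1 - a0) * y_hat_prev) - (a1 * e_prev + (1 - a1) * e_hat_prev)
        e_hat[t] = (a1 * y_prev + (1 - a1) * y_hat_prev) + (a0 * e_prev + (1 - a0) * e_hat_prev)
        y_prev, e_prev = y[t], y[t] - y_hat[t]          # next step uses y_t and e_t = y_t - y^_t
        y_hat_prev, e_hat_prev = y_hat[t], e_hat[t]
    return y_hat, e_hat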

However, the method itself is restrictive: it does not easily allow producing prediction intervals or deriving the likelihood function. It is also important to understand what sort of statistical model underlies CES. This model can be written in the following state space form:

\begin{equation} \begin{aligned} {y}_t &= {l}_{t-1}+{\epsilon}_t\\ {l}_t &= {l}_{t-1}-\left(1-{\alpha}_1\right){c}_{t-1}+\left({\alpha}_0-{\alpha}_1\right){\epsilon}_t\\ {c}_t &= {l}_{t-1}+\left(1-{\alpha}_0\right){c}_{t-1}+\left({\alpha}_0+{\alpha}_1\right){\epsilon}_t, \end{aligned} \tag{6} \end{equation}

where ${\epsilon}_t$ is the white noise error term, ${l}_t$ is the level component and ${c}_t$ is the nonlinear trend component at observation $t$. Observe that the dependencies in the time series have an interactive structure, and no explicit trend component is present: the model does not need to artificially break the series into level and trend, as ETS does. Although we call the ${c}_t$ component the “nonlinear trend,” it does not correspond to the conventional trend component, because it contains information about both the previous ${c}_{t-1}$ and the level ${l}_{t-1}$. Also, note that we use ${\epsilon}_t$ instead of ${e}_t$ in (6), which means that CES has (6) as an underlying statistical model only when there is no misspecification error. When the model is estimated, ${\epsilon}_t$ is substituted by ${e}_t$, which leads back to the original formulation (4).
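
A short simulation sketch can make the interaction in (6) visible: generate a series from the state space equations with Gaussian noise, using arbitrary parameter and starting values chosen purely for illustration.

import numpy as np

rng = np.random.default_rng(42)
a0, a1 = 1.6, 1.0        # arbitrary complex smoothing parameter
l, c = 100.0, 100.0      # arbitrary initial level and "nonlinear trend"
y = np.zeros(50)

for t in range(50):
    eps = rng.normal(scale=1.0)                  # white noise error
    y[t] = l + eps                               # measurement equation
    l, c = (l - (1 - a1) * c + (a0 - a1) * eps,  # level transition
            l + (1 - a0) * c + (a0 + a1) * eps)  # "nonlinear trend" transition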

This idea allows rewriting (6) in a shorter, more generic way, resembling the general single source of error (SSOE) state space framework:

\begin{equation} \begin{aligned} {y}_t &= {\mathbf{w}}^{\prime }{\mathbf{v}}_{t-1}+{\epsilon}_t\\ {\mathbf{v}}_t &= \mathbf{F}{\mathbf{v}}_{t-1}+\mathbf{g}{\epsilon}_t, \end{aligned} \tag{7} \end{equation}

where ${\mathbf{v}}_t=\left(\begin{array}{c}{l}_t\\ {c}_t\end{array}\right)$ is the state vector,

$\mathbf{F}=\left(\begin{array}{cc}1& -\left(1-{\alpha}_1\right)\\ 1& 1-{\alpha}_0\end{array}\right)$ is the transition matrix,

$\mathbf{g}=\left(\begin{array}{c}{\alpha}_0-{\alpha}_1\\ {\alpha}_0+{\alpha}_1\end{array}\right)$ is the persistence vector and

$\mathbf{w}=\left(\begin{array}{c}1\\ 0\end{array}\right)$ is the measurement vector.

The state space form (7) permits extending CES in a similar way to ETS, to include additional states for seasonality or exogenous variables. The main difference between model (7) and the conventional ETS is that the transition matrix in (7) includes smoothing parameters, which is not a standard feature of ETS models. Furthermore, the persistence vector contains interactions of the complex smoothing parameters, rather than the smoothing parameters themselves.
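
The matrices above can be assembled directly in code. The sketch below (an illustration of (7), not a library API) builds $\mathbf{F}$, $\mathbf{g}$ and $\mathbf{w}$ from $\alpha_0$ and $\alpha_1$ and runs the single source of error recursion over a series:

import numpy as np

def ces_matrices(a0, a1):
    F = np.array([[1.0, -(1.0 - a1)],
                  [1.0,  1.0 - a0]])   # transition matrix
    g = np.array([a0 - a1, a0 + a1])   # persistence vector
    w = np.array([1.0, 0.0])           # measurement vector
    return F, g, w

def ces_ssoe_filter(y, a0, a1, v0):
    F, g, w = ces_matrices(a0, a1)
    v = np.asarray(v0, dtype=float)    # state vector (l_t, c_t)
    fitted, errors = [], []
    for obs in y:
        y_hat = w @ v                  # one-step-ahead forecast w'v_{t-1}
        eps = obs - y_hat              # observed error
        v = F @ v + g * eps            # state update
        fitted.append(y_hat)
        errors.append(eps)
    return np.array(fitted), np.array(errors), v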

The error term in (6) is additive, so the likelihood function for CES is trivial and is similar to the one in the additive exponential smoothing models (Hyndman et al., 2008, p. 68):

\begin{equation} \mathcal{L}\left(\mathbf{g},{\mathbf{v}}_0,{\sigma}^2\mid \mathbf{Y}\right)={\left(\frac{1}{\sigma \sqrt{2\pi }}\right)}^T\exp \left(-\frac{1}{2}\sum \limits_{t=1}^T{\left(\frac{\epsilon_t}{\sigma}\right)}^2\right), \tag{8} \end{equation}

where ${\mathbf{v}}_0$ is the vector of initial states, ${\sigma}^2$ is the variance of the error term and $\mathbf{Y}$ is the vector of all the in-sample observations.
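
Given the one-step-ahead errors produced by a filter such as the one sketched above, the logarithm of (8) with the variance concentrated out reduces to a simple expression; the helper below is an illustration, not part of statsforecast:

import numpy as np

def ces_loglik(errors):
    """Gaussian log-likelihood of (8) with sigma^2 replaced by its ML estimate."""
    T = len(errors)
    sigma2 = np.mean(np.square(errors))   # ML estimate of the error variance
    return -0.5 * T * (np.log(2 * np.pi * sigma2) + 1.0)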

Stationarity and stability conditions for CES

In order to understand the properties of CES, we need to study its stationarity and stability conditions. The former holds for general exponential smoothing in the state space form (7) when all the eigenvalues of $\mathbf{F}$ lie inside the unit circle (Hyndman et al., 2008, p. 38). CES can be either stationary or not, depending on the value of the complex smoothing parameter, in contrast to ETS models, which are always non-stationary. Calculating the eigenvalues of $\mathbf{F}$ for CES gives the following roots:

\begin{equation} \lambda =\frac{2-{\alpha}_0\pm \sqrt{\alpha_0^2+4{\alpha}_1-4}}{2}. \tag{9} \end{equation}

If the absolute values of both roots are less than 1 then the estimated CES is stationary.

When ${\alpha}_1>1$, one of the eigenvalues will always be greater than one. In this case both eigenvalues are real numbers and CES produces a non-stationary trajectory. When ${\alpha}_1=1$, CES becomes equivalent to ETS(A,N,N). Finally, the model becomes stationary when:

\begin{equation} \left\{\begin{array}{l}{\alpha}_1<5-2{\alpha}_0\\ {\alpha}_1<1\\ {\alpha}_1>1-{\alpha}_0\end{array}\right. \tag{10} \end{equation}

Note that we do not restrict CES with the conditions (10); we merely show how the model will behave depending on the value of the complex smoothing parameter. This property means that CES is able to model either stationary or non-stationary processes, without the need to switch between them. Which behaviour it exhibits for a particular time series depends on the estimated values of the smoothing parameters.
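
A quick numerical check of the stationarity condition, based directly on the eigenvalues of $\mathbf{F}$ (the parameter values below are arbitrary examples):

import numpy as np

def is_stationary(a0, a1):
    F = np.array([[1.0, -(1.0 - a1)],
                  [1.0,  1.0 - a0]])
    return bool(np.all(np.abs(np.linalg.eigvals(F)) < 1.0))

print(is_stationary(1.5, 0.9))   # inside the stationarity triangle (10) -> True
print(is_stationary(1.0, 1.3))   # alpha_1 > 1 -> non-stationary -> False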

The other important property that arises from (7) is the stability condition for CES. With ${\epsilon}_t={y}_t-{l}_{t-1}$ the following is obtained:

\begin{equation} \begin{aligned} {y}_t &= {l}_{t-1}+{\epsilon}_t\\ \left(\begin{array}{c}{l}_t\\ {c}_t\end{array}\right) &= \left(\begin{array}{cc}1-{\alpha}_0+{\alpha}_1& -\left(1-{\alpha}_1\right)\\ 1-{\alpha}_0-{\alpha}_1& 1-{\alpha}_0\end{array}\right)\left(\begin{array}{c}{l}_{t-1}\\ {c}_{t-1}\end{array}\right)+\left(\begin{array}{c}{\alpha}_0-{\alpha}_1\\ {\alpha}_1+{\alpha}_0\end{array}\right){y}_t. \end{aligned} \tag{11} \end{equation}

The matrix $\mathbf{D}=\left(\begin{array}{cc}1-{\alpha}_0+{\alpha}_1& -\left(1-{\alpha}_1\right)\\ 1-{\alpha}_0-{\alpha}_1& 1-{\alpha}_0\end{array}\right)$ is called the discount matrix and can be written in the general form:

\begin{equation} \mathbf{D}=\mathbf{F}-\mathbf{g}{\mathbf{w}}^{\prime }. \tag{12} \end{equation}

The model is said to be stable if all the eigenvalues of the discount matrix (12) lie inside the unit circle. This is a more important condition than stationarity, because it ensures that the complex weights decline over time, so that older observations receive smaller weights than newer ones, which is one of the main features of conventional ETS models. The eigenvalues are given by the following formula:

\begin{equation} \lambda =\frac{2-2{\alpha}_0+{\alpha}_1\pm \sqrt{8{\alpha}_1+4{\alpha}_0-4{\alpha}_0{\alpha}_1-4-3{\alpha}_1^2}}{2}. \tag{13} \end{equation}

CES will be stable when the following system of inequalities is satisfied:

\begin{equation} \left\{\begin{array}{l}{\left({\alpha}_0-2.5\right)}^2+{\alpha}_1^2>1.25\\ {\left({\alpha}_0-0.5\right)}^2+{\left({\alpha}_1-1\right)}^2>0.25\\ {\left({\alpha}_0-1.5\right)}^2+{\left({\alpha}_1-0.5\right)}^2<1.5.\end{array}\right. \tag{14} \end{equation}

Both the stationarity and the stability regions are shown in Figure 1. The stationarity region (10) corresponds to the triangle. All the combinations of smoothing parameters lying below the curve inside the triangle produce stationary harmonic trajectories, while the rest lead to exponential trajectories. The stability condition (14) corresponds to the dark region. The stability region intersects the stationarity region, but in general a stable CES can produce both stationary and non-stationary forecasts.
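
The stability condition can be checked in the same way through the eigenvalues of the discount matrix $\mathbf{D}=\mathbf{F}-\mathbf{g}{\mathbf{w}}^{\prime}$; again, the parameter values are arbitrary examples:

import numpy as np

def is_stable(a0, a1):
    F = np.array([[1.0, -(1.0 - a1)],
                  [1.0,  1.0 - a0]])
    g = np.array([[a0 - a1], [a0 + a1]])
    w = np.array([[1.0, 0.0]])
    D = F - g @ w                       # discount matrix (12)
    return bool(np.all(np.abs(np.linalg.eigvals(D)) < 1.0))

print(is_stable(1.5, 1.0))   # satisfies the system (14) -> True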

Conditional mean and variance of CES

The conditional mean of CES for $h$ steps ahead with known ${l}_t$ and ${c}_t$ can be calculated using the state space model (7):

\begin{equation} \mathrm{E}\left({y}_{t+h}\mid {\mathbf{v}}_t\right)={\mathbf{w}}^{\prime }{\mathbf{F}}^{h-1}{\mathbf{v}}_t, \tag{15} \end{equation}

where

$\mathrm{E}\left({y}_{t+h}\mid {\mathbf{v}}_t\right)={\hat{y}}_{t+h}$,

while $\mathbf{F}$ and $\mathbf{w}$ are the matrices from (7).
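
A sketch of (15) in code, producing the multi-step point forecasts from a known state vector (the state and parameter values below are arbitrary illustrations):

import numpy as np
from numpy.linalg import matrix_power

def ces_forecast(v_t, a0, a1, h):
    F = np.array([[1.0, -(1.0 - a1)],
                  [1.0,  1.0 - a0]])
    w = np.array([1.0, 0.0])
    return np.array([w @ matrix_power(F, j - 1) @ v_t for j in range(1, h + 1)])

# With alpha_1 = 1 the trajectory is flat, as described in case 1 below
print(ces_forecast(np.array([100.0, 95.0]), 1.3, 1.0, h=5))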

The forecasting trajectories of (15) differ depending on the values of ${l}_t$, ${c}_t$ and the complex smoothing parameter. The analysis of the stationarity condition shows that there are several types of forecasting trajectories of CES, depending on the particular value of the complex smoothing parameter:

  1. When ${\alpha}_1=1$, all the forecast values are equal to the last obtained forecast, which corresponds to a flat line. This trajectory is shown in Figure 2A.
  2. When ${\alpha}_1>1$, the model produces a trajectory with exponential growth, which is shown in Figure 2B.
  3. When $\frac{4-{\alpha}_0^2}{4}<{\alpha}_1<1$, the trajectory becomes stationary and CES produces an exponential decline, shown in Figure 2C.
  4. When $1-{\alpha}_0<{\alpha}_1<\frac{4-{\alpha}_0^2}{4}$, the trajectory becomes harmonic and converges to zero (Figure 2D).
  5. Finally, when $0<{\alpha}_1<1-{\alpha}_0$, a diverging harmonic trajectory is produced and the model becomes non-stationary. This trajectory is of no use for forecasting, which is why we do not show it in the figures.

Using (7), the conditional variance of CES for $h$ steps ahead with known ${l}_t$ and ${c}_t$ can be calculated similarly to the pure additive ETS models (Hyndman et al., 2008, p. 96).

Loading libraries and data

Tip

Statsforecast will be needed. To install, see instructions.
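
A minimal install command (the same pattern is used later for datasetsforecast):

!pip install statsforecast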

Next, we import plotting libraries and configure the plotting style.

import pandas as pd

import scipy.stats as stats
import matplotlib.pyplot as plt
import seaborn as sns
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.graphics.tsaplots import plot_pacf

plt.style.use('fivethirtyeight')
plt.rcParams['lines.linewidth'] = 1.5
dark_style = {
    'figure.facecolor': '#212946',
    'axes.facecolor': '#212946',
    'savefig.facecolor':'#212946',
    'axes.grid': True,
    'axes.grid.which': 'both',
    'axes.spines.left': False,
    'axes.spines.right': False,
    'axes.spines.top': False,
    'axes.spines.bottom': False,
    'grid.color': '#2A3459',
    'grid.linewidth': '1',
    'text.color': '0.9',
    'axes.labelcolor': '0.9',
    'xtick.color': '0.9',
    'ytick.color': '0.9',
    'font.size': 12 }
plt.rcParams.update(dark_style)

from pylab import rcParams
rcParams['figure.figsize'] = (18,7)

Read Data

df = pd.read_csv("https://raw.githubusercontent.com/Naren8520/Serie-de-tiempo-con-Machine-Learning/main/Data/Esperanza_vida.csv", usecols=[1,2])
df.head()
         year      value
0  1960-01-01  69.123902
1  1961-01-01  69.760244
2  1962-01-01  69.149756
3  1963-01-01  69.248049
4  1964-01-01  70.311707

The input to StatsForecast is always a data frame in long format with three columns: unique_id, ds and y:

  • The unique_id (string, int or category) represents an identifier for the series.

  • The ds (datestamp) column should be of a format expected by Pandas, ideally YYYY-MM-DD for a date or YYYY-MM-DD HH:MM:SS for a timestamp.

  • The y (numeric) represents the measurement we wish to forecast.

df["unique_id"]="1"
df.columns=["ds", "y", "unique_id"]
df.head()
           ds          y unique_id
0  1960-01-01  69.123902         1
1  1961-01-01  69.760244         1
2  1962-01-01  69.149756         1
3  1963-01-01  69.248049         1
4  1964-01-01  70.311707         1

Now, let’s check the data types of our columns using the dtypes attribute.

print(df.dtypes)
ds            object
y            float64
unique_id     object
dtype: object

We need to convert the ds from object type to datetime.

df["ds"] = pd.to_datetime(df["ds"])

Explore data with the plot method

Plot some series using the plot method from the StatsForecast class. This method plots a random series from the dataset and is useful for basic EDA.

from statsforecast import StatsForecast

StatsForecast.plot(df, engine="matplotlib")

Autocorrelation plots

fig, axs = plt.subplots(nrows=1, ncols=2)

# Autocorrelation plot
plot_acf(df["y"], lags=20, ax=axs[0], color="fuchsia")
axs[0].set_title("Autocorrelation")

# Partial autocorrelation plot
plot_pacf(df["y"], lags=20, ax=axs[1], color="lime")
axs[1].set_title("Partial Autocorrelation")

plt.show()

Decomposition of the time series

How to decompose a time series and why?

In time series analysis, in order to forecast new values it is very important to understand the past data; more formally, to understand the patterns that the values follow over time. There can be many reasons that cause our forecasts to go in the wrong direction. Basically, a time series consists of four components, and variation in these components causes changes in the pattern of the time series. These components are:

  • Level: the baseline value of the series, around which it varies over time.
  • Trend: the component that causes increasing or decreasing patterns in the time series.
  • Seasonality: a cyclical event that occurs over a short period and causes short-term increasing or decreasing patterns in the time series.
  • Residual/Noise: the random variations in the time series.

Combining these components over time forms the time series. Most time series contain a level and noise/residual, while trend and seasonality are optional.

If seasonality and trend are part of the time series, they will affect the forecast values, since the pattern of the forecasted time series may differ from the earlier part of the series.

The components can be combined in two ways:

  • Additive
  • Multiplicative

Additive time series

If the components of the time series are added together to form the series, then it is called an additive time series. Visually, a time series is additive if its increasing or decreasing pattern has a similar magnitude throughout the series. The mathematical form of an additive time series is: $y(t) = \text{Level} + \text{Trend} + \text{Seasonality} + \text{Noise}$

Multiplicative time series

If the components of the time series are multiplied together, then the series is called a multiplicative time series. Visually, if the time series exhibits exponential growth or decline over time, it can be considered a multiplicative time series. The mathematical form of a multiplicative time series is:

$y(t) = \text{Level} \times \text{Trend} \times \text{Seasonality} \times \text{Noise}$
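
A small sketch of the relationship between the two forms: taking logarithms turns a multiplicative series into an additive one, which is why log transforms are often applied before an additive decomposition. The series below is synthetic, built only for illustration.

import numpy as np

t = np.arange(1, 121)
level = 100.0
trend = 1.01 ** t
season = 1 + 0.1 * np.sin(2 * np.pi * t / 12)
noise = np.exp(np.random.default_rng(0).normal(scale=0.01, size=t.size))

y_mult = level * trend * season * noise   # multiplicative series
y_log = np.log(y_mult)                    # = log(level) + log(trend) + log(season) + log(noise)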

from statsmodels.tsa.seasonal import seasonal_decompose 
a = seasonal_decompose(df["y"], model = "add", period=1)
a.plot();

Split the data into training and testing

Let’s divide our data into two sets:

  1. Data to train our model.
  2. Data to test our model.

For the test data we will use the last 6 years (the observations after 2013) to test and evaluate the performance of our model.

train = df[df.ds<='2013-01-01'] 
test = df[df.ds>'2013-01-01']
train.shape, test.shape
((54, 3), (6, 3))

Now let’s plot the training data and the test data.

sns.lineplot(train,x="ds", y="y", label="Train")
sns.lineplot(test, x="ds", y="y", label="Test")
plt.show()

Implementation of AutoCES with StatsForecast

The parameters of the AutoCES model are listed below. For more information, visit the documentation.

model : str
    Controlling state-space-equations.
season_length : int
    Number of observations per unit of time. Ex: 24 Hourly data.
alias : str
    Custom name of the model.
prediction_intervals : Optional[ConformalIntervals]
    Information to compute conformal prediction intervals.
    By default, the model will compute the native prediction
    intervals.

Load libraries

from statsforecast import StatsForecast
from statsforecast.models import AutoCES


Instantiate Model

Import and instantiate the model. Setting the season_length argument is sometimes tricky. This article on seasonal periods by the master, Rob Hyndman, can be useful.

Note

Automatically selects the best Complex Exponential Smoothing model using an information criterion; the default is the corrected Akaike Information Criterion (AICc), and candidate models are estimated using maximum likelihood. The state space equations can be determined based on their components: "S" (simple), "P" (partial), "Z" (optimized) or "N" (omitted). The model string parameter defines the kind of CES model: "N" for simple CES (without seasonality), "S" for simple seasonality (lagged CES), "P" for partial seasonality (without the complex part), and "F" for full seasonality (lagged CES with real and complex seasonal parts).

If a component is selected as "Z", it operates as a placeholder, asking the AutoCES model to figure out the best option.
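
For example, the model string can be passed directly when instantiating AutoCES; the values below simply illustrate the options described above:

from statsforecast.models import AutoCES

ces_simple = AutoCES(season_length=1, model="N")    # simple CES, no seasonality
ces_auto   = AutoCES(season_length=12, model="Z")   # let AutoCES pick the best option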

season_length = 1 # year data 
horizon = len(test) # number of predictions

# We call the model that we are going to use
models = [AutoCES(season_length=season_length)]

We fit the models by instantiating a new StatsForecast object with the following parameters:

  • models: a list of models. Select the models you want from statsforecast.models and import them.

  • freq: a string indicating the frequency of the data. (See pandas’ available frequencies.)

  • n_jobs: int, number of jobs used in the parallel processing; use -1 for all cores.

  • fallback_model: a model to be used if a model fails.

Any settings are passed into the constructor. Then you call its fit method and pass in the historical data frame.

sf = StatsForecast(df=train,
                   models=models,
                   freq='YS', 
                   n_jobs=-1)

Fit the Model

sf.fit()
StatsForecast(models=[CES])
result=sf.fitted_[0,0].model_
print(result.keys())
print(result['fit'])
dict_keys(['loglik', 'aic', 'bic', 'aicc', 'mse', 'amse', 'fit', 'fitted', 'residuals', 'm', 'states', 'par', 'n', 'seasontype', 'sigma2', 'actual_residuals'])
results(x=array([1.63706552, 1.00511519]), fn=76.78049826760919, nit=27, simplex=array([[1.63400329, 1.00510199],
       [1.63706552, 1.00511519],
       [1.63638944, 1.00512037]]))

Let us now visualize the residuals of our models.

As we can see, the result obtained above is a dictionary. To extract each element from the dictionary, we use the .get() method and then save the result in a pd.DataFrame().

residual=pd.DataFrame(result.get("residuals"), columns=["residual Model"])
residual
    residual Model
0        -0.727729
1         0.144552
2        -0.762086
..             ...
51       -0.073258
52       -0.234578
53        0.109990
fig, axs = plt.subplots(nrows=2, ncols=2)

# Residuals over time
residual.plot(ax=axs[0, 0])
axs[0, 0].set_title("Residuals")

# Density plot of the residuals
sns.histplot(residual["residual Model"], kde=True, ax=axs[0, 1])
axs[0, 1].set_title("Density plot - Residual")

# Q-Q plot against the normal distribution
stats.probplot(residual["residual Model"], dist="norm", plot=axs[1, 0])
axs[1, 0].set_title("Plot Q-Q")

# Autocorrelation of the residuals
plot_acf(residual["residual Model"], lags=35, ax=axs[1, 1], color="fuchsia")
axs[1, 1].set_title("Autocorrelation")

plt.show()

Forecast Method

If you want to gain speed in productive settings where you have multiple series or models we recommend using the StatsForecast.forecast method instead of .fit and .predict.

The main difference is that .forecast does not store the fitted values and is highly scalable in distributed environments.

The forecast method takes two arguments: the forecast horizon h and (optionally) level.

  • h (int): represents the forecast h steps into the future. In this case, 6 years ahead.

  • level (list of floats): this optional parameter is used for probabilistic forecasting. Set the level (or confidence percentile) of your prediction interval. For example, level=[90] means that the model expects the real value to be inside that interval 90% of the time.

The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals. Since we are fitting a single CES model to a short series, this step should only take a few seconds.

# Prediction
Y_hat = sf.forecast(horizon, fitted=True)

Y_hat
                   ds        CES
unique_id
1          2014-01-01  82.906075
1          2015-01-01  83.166687
1          2016-01-01  83.424744
1          2017-01-01  83.685760
1          2018-01-01  83.946213
1          2019-01-01  84.208359
values=sf.forecast_fitted_values()
values.head()
                   ds          y        CES
unique_id
1          1960-01-01  69.123901  69.851631
1          1961-01-01  69.760246  69.615692
1          1962-01-01  69.149757  69.911842
1          1963-01-01  69.248047  69.657822
1          1964-01-01  70.311707  69.601196
StatsForecast.plot(values)

Adding 95% confidence interval with the forecast method

sf.forecast(h=horizon, level=[95])
                   ds        CES  CES-lo-95  CES-hi-95
unique_id
1          2014-01-01  82.906075  82.342484  83.454018
1          2015-01-01  83.166687  82.604027  83.717270
1          2016-01-01  83.424744  82.858574  83.975868
1          2017-01-01  83.685760  83.118942  84.239578
1          2018-01-01  83.946213  83.376907  84.501137
1          2019-01-01  84.208359  83.637741  84.765411
Y_hat=Y_hat.reset_index()
Y_hat
  unique_id          ds        CES
0         1  2014-01-01  82.906075
1         1  2015-01-01  83.166687
2         1  2016-01-01  83.424744
3         1  2017-01-01  83.685760
4         1  2018-01-01  83.946213
5         1  2019-01-01  84.208359
# Merge the forecasts with the true values
test['unique_id'] = test['unique_id'].astype(int)
Y_hat = test.merge(Y_hat, how='left', on=['unique_id', 'ds'])
Y_hat
           ds          y unique_id        CES
0  2014-01-01  83.090244         1  82.906075
1  2015-01-01  82.543902         1  83.166687
2  2016-01-01  83.243902         1  83.424744
3  2017-01-01  82.946341         1  83.685760
4  2018-01-01  83.346341         1  83.946213
5  2019-01-01  83.197561         1  84.208359
fig, ax = plt.subplots(1, 1)
plot_df = pd.concat([train, Y_hat]).set_index('ds')
plot_df[['y', "CES"]].plot(ax=ax, linewidth=2)
ax.set_title('Forecast', fontsize=22)
ax.set_ylabel('Life expectancy (years)', fontsize=20)
ax.set_xlabel('Timestamp [t]', fontsize=20)
ax.legend(prop={'size': 15})
ax.grid(True)

Predict method with confidence interval

To generate forecasts use the predict method.

The predict method takes two arguments: the forecast horizon h and (optionally) level.

  • h (int): represents the forecast h steps into the future. In this case, 6 years ahead.

  • level (list of floats): this optional parameter is used for probabilistic forecasting. Set the level (or confidence percentile) of your prediction interval. For example, level=[95] means that the model expects the real value to be inside that interval 95% of the time.

The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals.

This step should take less than 1 second.

sf.predict(h=horizon)
                   ds        CES
unique_id
1          2014-01-01  82.906075
1          2015-01-01  83.166687
1          2016-01-01  83.424744
1          2017-01-01  83.685760
1          2018-01-01  83.946213
1          2019-01-01  84.208359
forecast_df = sf.predict(h=horizon, level=[95]) 

forecast_df
                   ds        CES  CES-lo-95  CES-hi-95
unique_id
1          2014-01-01  82.906075  82.342484  83.454018
1          2015-01-01  83.166687  82.604027  83.717270
1          2016-01-01  83.424744  82.858574  83.975868
1          2017-01-01  83.685760  83.118942  84.239578
1          2018-01-01  83.946213  83.376907  84.501137
1          2019-01-01  84.208359  83.637741  84.765411

We can join the forecast result with the historical data using the pandas function pd.concat(), and then use the result for plotting.

pd.concat([df, forecast_df]).set_index('ds')
                    y unique_id        CES  CES-lo-95  CES-hi-95
ds
1960-01-01  69.123902         1        NaN        NaN        NaN
1961-01-01  69.760244         1        NaN        NaN        NaN
1962-01-01  69.149756         1        NaN        NaN        NaN
...               ...       ...        ...        ...        ...
2017-01-01        NaN       NaN  83.685760  83.118942  84.239578
2018-01-01        NaN       NaN  83.946213  83.376907  84.501137
2019-01-01        NaN       NaN  84.208359  83.637741  84.765411

Now let’s visualize the result of our forecast together with the historical data of our time series, and also draw the 95% prediction interval obtained when making the prediction.

def plot_forecasts(y_hist, y_true, y_pred, models):
    _, ax = plt.subplots(1, 1, figsize=(20, 7))
    # Merge the forecasts with the true values and append them to the history
    y_true = y_true.merge(y_pred, how='left', on=['unique_id', 'ds'])
    df_plot = pd.concat([y_hist, y_true]).set_index('ds').tail(12 * 10)
    df_plot[['y'] + models].plot(ax=ax, linewidth=2)
    colors = ['orange', 'black', 'green']
    # Shade the 95% prediction interval for each model
    for model, color in zip(models, colors):
        ax.fill_between(df_plot.index, 
                        df_plot[f'{model}-lo-95'], 
                        df_plot[f'{model}-hi-95'],
                        alpha=.35,
                        color=color,
                        label=f'{model}-level-95')
    ax.set_title('', fontsize=22)
    ax.set_ylabel('', fontsize=20)
    ax.set_xlabel('Timestamp [t]', fontsize=20)
    ax.legend(prop={'size': 15})
    ax.grid(True)
plot_forecasts(train, test, forecast_df, models=['CES'])

Let’s plot the same graph using the plot function that comes in Statsforecast, as shown below.

sf.plot(df, forecast_df, level=[95])

Cross-validation

In previous steps, we’ve taken our historical data to predict the future. However, to assess its accuracy we would also like to know how the model would have performed in the past. To assess the accuracy and robustness of your model on your data, perform cross-validation.

With time series data, Cross Validation is done by defining a sliding window across the historical data and predicting the period following it. This form of cross-validation allows us to arrive at a better estimation of our model’s predictive abilities across a wider range of temporal instances while also keeping the data in the training set contiguous as is required by our models.

The following graph depicts such a Cross Validation Strategy:

Perform time series cross-validation

Cross-validation of time series models is considered a best practice but most implementations are very slow. The statsforecast library implements cross-validation as a distributed operation, making the process less time-consuming to perform. If you have big datasets you can also perform Cross Validation in a distributed cluster using Ray, Dask or Spark.

In this case, we want to evaluate the performance of the model over the last windows of the training data (n_windows=3), with the forecast origins spaced 12 years apart (step_size=12). Depending on your computer, this step should take only a few seconds.

The cross_validation method from the StatsForecast class takes the following arguments.

  • df: training data frame

  • h (int): represents h steps into the future that are being forecasted. In this case, 6 years ahead.

  • step_size (int): step size between each window. In other words: how often do you want to run the forecasting processes.

  • n_windows(int): number of windows used for cross validation. In other words: what number of forecasting processes in the past do you want to evaluate.

crossvalidation_df = sf.cross_validation(df=train,
                                         h=horizon,
                                         step_size=12,
                                         n_windows=3)

The crossvalidation_df object is a new data frame that includes the following columns:

  • unique_id: index. If you don’t like working with an index, just run crossvalidation_df.reset_index()
  • ds: datestamp or temporal index
  • cutoff: the last datestamp or temporal index for the n_windows.
  • y: true value
  • "model": columns with the model’s name and fitted value.
crossvalidation_df.head()
                   ds      cutoff          y        CES
unique_id
1          1984-01-01  1983-01-01  75.389511  74.952705
1          1985-01-01  1983-01-01  75.470734  75.161736
1          1986-01-01  1983-01-01  75.770729  75.377945
1          1987-01-01  1983-01-01  76.219513  75.590378
1          1988-01-01  1983-01-01  76.370735  75.806343

Model Evaluation

We can now compute the accuracy of the forecast using an appropriate accuracy metric. Here we’ll use the Root Mean Squared Error (RMSE). To do this, we first need to install datasetsforecast, a Python library developed by Nixtla that includes a function to compute the RMSE.

!pip install datasetsforecast
from datasetsforecast.losses import rmse

The function to compute the RMSE takes two arguments:

  1. The actual values.
  2. The forecasts, in this case, AutoCES.
rmse = rmse(crossvalidation_df['y'], crossvalidation_df["CES"])
print("RMSE using cross-validation: ", rmse)
RMSE using cross-validation:  0.40604722

As you have noticed, we have used the cross validation results to perform the evaluation of our model.

Now we are going to evaluate our model using the results of the predictions; we will use several metrics (MAE, MAPE, MASE, RMSE, sMAPE) to evaluate the accuracy.

from datasetsforecast.losses import (mae, mape, mase, rmse, smape)
def evaluate_performance(y_hist, y_hat, model):
    evaluation = {}
    evaluation[model] = {}
    for metric in [mase, mae, mape, rmse, smape]:
        metric_name = metric.__name__
        if metric_name == 'mase':
            # MASE additionally needs the training data and the seasonality
            evaluation[model][metric_name] = metric(y_hat['y'].values, 
                                                    y_hat[model].values, 
                                                    y_hist['y'].values, seasonality=24)
        else:
            evaluation[model][metric_name] = metric(y_hat['y'].values, y_hat[model].values)
    return pd.DataFrame(evaluation).T
evaluate_performance(train, Y_hat, model="CES")
          mae      mape      mase      rmse     smape
CES  0.556314  0.669916  0.087037  0.630183  0.667133

Acknowledgements

We would like to thank Naren Castellon for writing this tutorial.

References

  1. Nixtla-AutoCES

  2. Rob J. Hyndman and George Athanasopoulos (2018). “Forecasting principles and practice, Time series cross-validation”.

  3. Complex exponential smoothing