Introduction

The optimized Theta model is a time series forecasting method based on decomposing the (seasonally adjusted) series into so-called theta lines: modified copies of the series whose local curvature is dampened or amplified. One theta line captures the long-term trend while another captures short-term behaviour; each component is forecast separately and the forecasts are recombined. The optimized Theta model has been shown to be highly competitive with other time series forecasting methods, especially for series with pronounced trend and seasonality.

The optimized Theta model was proposed by Jose A. Fiorucci, Tiago R. Pellegrini, Francisco Louzada, Fotios Petropoulos and Anne B. Koehler in 2016 (see the references). It builds on the Theta method, introduced by V. Assimakopoulos and K. Nikolopoulos in 2000, which decomposes the seasonally adjusted series into theta lines, forecasts the long-term and short-term components separately, and recombines them to produce the final forecast.

The optimized Theta model improves on the Theta method by using an optimization algorithm to find the best parameters for the model rather than fixing them in advance. The optimization is guided by the Akaike information criterion (AIC), a measure of the goodness of fit of a model that also penalizes model complexity; the algorithm searches for the parameter values that minimize this criterion.
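As a rough illustration of the kind of criterion being minimized, below is one common Gaussian form of the AIC computed from model residuals. This is a hypothetical helper for intuition only, not the library's internal implementation:

import numpy as np

def aic(residuals, n_params):
    """One common Gaussian form of the AIC: n*log(RSS/n) + 2k."""
    e = np.asarray(residuals, dtype=float)
    n = e.size
    rss = np.sum(e ** 2)  # residual sum of squares
    return n * np.log(rss / n) + 2 * n_params  # lower is better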

The model has been used to forecast a wide variety of time series, including sales, production, prices, and weather.

Below are some of the benefits of the optimized Theta model:

  • It is accurate, often outperforming classical forecasting benchmarks.
  • It is easy to use.
  • It can forecast a wide variety of time series.
  • It is flexible and can be adapted to different scenarios.

If you are looking for an easy-to-use and accurate time series forecasting method, the Optimized Theta model is a good choice.

The optimized Theta model can be applied in a variety of areas, including:

  • Sales: forecasting the sales of products or services helps companies make decisions about production, inventory, and marketing.
  • Production: forecasting the production of goods or services helps companies ensure they have the capacity to meet demand while avoiding overproduction.
  • Prices: forecasting the prices of goods or services helps companies set pricing and marketing strategy.
  • Weather: weather-related forecasts support decisions in agricultural production, travel planning, and risk management.
  • Other: the model can also forecast other types of time series, such as traffic, energy demand, and population.

The optimized Theta model is a powerful, easy-to-use tool for improving the accuracy of time series forecasts across many domains. In the rest of this tutorial we describe the model and implement it with StatsForecast.

Optimized Theta Model (OTM)

Assume that the time series $Y_1, \cdots, Y_n$ is either non-seasonal or has been seasonally adjusted using the multiplicative classical decomposition approach.

Let $X_t$ be a linear combination of two theta lines,

\begin{equation} X_t=\omega \text{Z}_t (\theta_1) +(1-\omega) \text{Z}_t (\theta_2) \tag{1} \end{equation}

where $\omega \in [0,1]$ is the weight parameter. Assuming that $\theta_1 < 1$ and $\theta_2 \geq 1$, the weight $\omega$ can be derived as

\begin{equation} \omega:=\omega(\theta_1, \theta_2)=\frac{\theta_2 -1}{\theta_2 -\theta_1} \tag{2} \end{equation}

It is straightforward to see from Eqs. (1) and (2) that $X_t=Y_t$ for $t=1, \cdots, n$, i.e., the weights are calculated in such a way that Eq. (1) reproduces the original series.
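This recomposition can be checked numerically. The sketch below assumes the usual theta-line definition $\text{Z}_t(\theta)=\theta Y_t+(1-\theta)(\text{A}_n+\text{B}_n t)$, where $\text{A}_n$ and $\text{B}_n$ are the least-squares intercept and slope of the series regressed on time:

import numpy as np

# Toy trended series
rng = np.random.default_rng(42)
n = 60
t = np.arange(1, n + 1)
Y = 10 + 0.5 * t + rng.normal(scale=2.0, size=n)

# Least-squares linear trend: Y_t ~ A_n + B_n * t
B_n, A_n = np.polyfit(t, Y, 1)
trend = A_n + B_n * t

def theta_line(theta):
    # Z_t(theta) = theta*Y_t + (1 - theta)*(A_n + B_n*t)
    return theta * Y + (1 - theta) * trend

theta1, theta2 = 0.0, 2.0
omega = (theta2 - 1) / (theta2 - theta1)  # Eq. (2), here 0.5
X = omega * theta_line(theta1) + (1 - omega) * theta_line(theta2)  # Eq. (1)
print(np.allclose(X, Y))  # True: Eq. (1) reproduces the original series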

Theorem 1: Let $\theta_1 < 1$ and $\theta_2 \geq 1$. We will prove that

  1. the linear system given by $X_t=Y_t$ for all $t=1, \cdots, n$, where $X_t$ is given by Eq. (1), has the single solution

$\omega= (\theta_2 -1)/(\theta_2 - \theta_1)$

  2. the error of choosing a non-optimal weight $\omega_{\delta} =\omega + \delta$ is proportional to the error of a simple linear regression model.

In Theorem 1, we prove that the solution is unique and that the error from not choosing the optimal weights ($\omega$ and $1-\omega$) is proportional to the error of a linear regression model. As a consequence, the STheta method is obtained simply by setting $\theta_1=0$ and $\theta_2=2$, for which Eq. (2) gives $\omega=0.5$. Thus, Eqs. (1) and (2) allow us to construct a generalisation of the Theta model that maintains the re-composition property of the original time series for any theta lines $\text{Z}_t (\theta_1)$ and $\text{Z}_t (\theta_2)$.

In order to maintain the modelling of the long-term component and retain a fair comparison with the STheta method, in this work we fix $\theta_1=0$ and focus on the optimisation of the short-term component, $\theta_2=\theta$ with $\theta \geq 1$. Thus, $\theta$ is the only parameter that requires estimation so far. The theta decomposition is now given by

$Y_t=\left(1-\frac{1}{\theta}\right) (\text{A}_n+\text{B}_n t)+ \frac{1}{\theta} \text{Z}_t (\theta), \ t=1, \cdots , n$

The $h$-step-ahead forecasts calculated at origin $n$ are given by

\begin{equation} \hat Y_{n+h|n} = \left(1-\frac{1}{\theta}\right) [\text{A}_n+\text{B}_n (n+h)]+ \frac{1}{\theta} \tilde {\text{Z}}_{n+h|n} (\theta) \tag{3} \end{equation}

where $\tilde {\text{Z}}_{n+h|n} (\theta)=\tilde {\text{Z}}_{n+1|n} (\theta)=\alpha \sum_{i=0}^{n-1}(1-\alpha)^i \text{Z}_{n-i}(\theta)+(1-\alpha)^n \ell_{0}^{*}$ is the extrapolation of $\text{Z}_t(\theta)$ by an SES model, with $\ell_{0}^{*} \in \mathbb{R}$ as the initial level parameter and $\alpha \in (0,1)$ as the smoothing parameter. Note that for $\theta=2$, Eq. (3) corresponds to Step 4 of the STheta algorithm. After some algebra, we can write

\begin{equation} \tilde {\text{Z}}_{n+1|n} (\theta)=\theta \ell_{n}+(1-\theta) \left\{ \text{A}_n [1-(1-\alpha)^n] + \text{B}_n \left[ n+\left(1-\frac{1}{\alpha}\right) [1-(1-\alpha)^n] \right] \right\} \tag{4} \end{equation}

where $\ell_{t}=\alpha Y_t +(1-\alpha) \ell_{t-1}$ for $t=1, \cdots, n$ and $\ell_{0}=\ell_{0}^{*}/\theta$.
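For intuition, the level recursion can be sketched in a few lines of Python (a hypothetical helper, not part of StatsForecast):

import numpy as np

def ses_levels(Y, alpha, ell0):
    """Compute ell_t = alpha*Y_t + (1 - alpha)*ell_{t-1}, starting from ell_0."""
    ell = np.empty(len(Y) + 1)
    ell[0] = ell0  # ell_0 = ell_0^* / theta
    for t, y in enumerate(Y, start=1):
        ell[t] = alpha * y + (1 - alpha) * ell[t - 1]
    return ell  # ell[-1] is the ell_n entering the theta*ell_n term of Eq. (4)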

In the light of Eqs. (3) and (4), we suggest four stochastic approaches. These approaches differ in the parameter $\theta$, which may be either fixed at two or optimised, and in the coefficients $\text{A}_n$ and $\text{B}_n$, which can be either fixed or dynamic functions. To formulate the state space models, it is helpful to adopt $\mu_{t}$ as the one-step-ahead forecast at origin $t-1$ and $\varepsilon_{t}$ as the respective additive error, i.e., $\varepsilon_{t}=Y_t - \mu_{t}$ with $\mu_{t}= \hat Y_{t|t-1}$. We assume $\{ \varepsilon_{t} \}$ to be a Gaussian white noise process with mean zero and variance $\sigma^2$.

More on Optimised Theta models

Let $\text{A}_n$ and $\text{B}_n$ be fixed coefficients for all $t=1, \cdots, n$, so that Eqs. (3) and (4) configure the state space model given by

\begin{equation} Y_t=\mu_{t}+\varepsilon_{t} \tag{5} \end{equation}
\begin{equation} \mu_{t}=\ell_{t-1}+\left(1-\frac{1}{\theta}\right) \left\{(1-\alpha)^{t-1} \text{A}_n +\left[\frac{1-(1-\alpha)^t}{\alpha}\right] \text{B}_n\right\} \tag{6} \end{equation}
\begin{equation} \ell_{t}=\alpha Y_t +(1-\alpha)\ell_{t-1} \tag{7} \end{equation}

with parameters $\ell_{0} \in \mathbb{R}$, $\alpha \in (0,1)$ and $\theta \in [1,\infty)$. The parameter $\theta$ is to be estimated along with $\alpha$ and $\ell_{0}$. We call this the optimised Theta model (OTM).

The $h$-step-ahead forecast at origin $n$ is given by

$\hat Y_{n+h|n}=E[Y_{n+h}|Y_1,\cdots, Y_n]=\ell_{n}+\left(1-\frac{1}{\theta}\right) \left\{(1-\alpha)^n \text{A}_n +\left[(h-1) + \frac{1-(1-\alpha)^{n+1}}{\alpha}\right] \text{B}_n \right\}$

which is equivalent to Eq. (3). The conditional variance $\text{Var}[Y_{n+h}|Y_1, \cdots, Y_n]=[1+(h-1)\alpha^2]\sigma^2$ can be computed easily from the state space model. Thus, the $100(1-\gamma)\%$ prediction interval for $Y_{n+h}$ is given by $\hat Y_{n+h|n} \ \pm \ q_{1-\gamma/2} \sqrt{[1+(h-1)\alpha^2 ]\sigma^2}$, where $q_{1-\gamma/2}$ is the corresponding standard normal quantile (we write the significance level as $\gamma$ to avoid a clash with the smoothing parameter $\alpha$).
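Given a point forecast, the smoothing parameter and an estimate of $\sigma^2$, the interval can be computed directly. A minimal sketch with hypothetical helper names:

import numpy as np
from scipy.stats import norm

def otm_interval(y_hat, h, alpha, sigma2, level=0.95):
    """Interval y_hat +/- q * sqrt((1 + (h-1)*alpha^2) * sigma2)."""
    q = norm.ppf(1 - (1 - level) / 2)  # standard normal quantile
    half_width = q * np.sqrt((1 + (h - 1) * alpha**2) * sigma2)
    return y_hat - half_width, y_hat + half_width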

For $\theta=2$, OTM reproduces the forecasts of the STheta method; hereafter, we will refer to this particular case as the standard Theta model (STM).

Theorem 2: The SES-d$(\ell_{0}^{**}, \alpha, b)$ model, where $\ell_{0}^{**} \in \mathbb{R}$, $\alpha \in (0,1)$ and $b \in \mathbb{R}$, is equivalent to $\text{OTM}(\ell_{0}, \alpha, \theta)$, where $\ell_{0} \in \mathbb{R}$ and $\theta \geq 1$, if

$\ell_{0}^{**} = \ell_{0} + \left(1- \frac{1}{\theta} \right)\text{A}_n \quad \text{and} \quad b=\left(1-\frac{1}{\theta} \right)\text{B}_n$

In Theorem 2, we show that OTM is mathematically equivalent to the SES-d model (simple exponential smoothing with drift). As a corollary of Theorem 2, STM is mathematically equivalent to SES-d with $b=\frac{1}{2} \text{B}_n$. Therefore, for $\theta=2$ the corollary also re-confirms the Hyndman and Billah (H&B) result on the relationship between STheta and the SES-d model.

Loading libraries and data

Tip

Statsforecast will be needed. To install it, see these instructions.
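For example, in a notebook it can typically be installed from PyPI with:

!pip install statsforecast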

Next, we import plotting libraries and configure the plotting style.

import matplotlib.pyplot as plt
import seaborn as sns
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
plt.style.use('grayscale') # fivethirtyeight  grayscale  classic
plt.rcParams['lines.linewidth'] = 1.5
dark_style = {
    'figure.facecolor': '#008080',  # #212946
    'axes.facecolor': '#008080',
    'savefig.facecolor': '#008080',
    'axes.grid': True,
    'axes.grid.which': 'both',
    'axes.spines.left': False,
    'axes.spines.right': False,
    'axes.spines.top': False,
    'axes.spines.bottom': False,
    'grid.color': '#000000',  #2A3459
    'grid.linewidth': '1',
    'text.color': '0.9',
    'axes.labelcolor': '0.9',
    'xtick.color': '0.9',
    'ytick.color': '0.9',
    'font.size': 12 }
plt.rcParams.update(dark_style)


plt.rcParams['figure.figsize'] = (18, 7)  # default figure size

Read Data

import pandas as pd

df = pd.read_csv("https://raw.githubusercontent.com/Naren8520/Serie-de-tiempo-con-Machine-Learning/main/Data/milk_production.csv", usecols=[1,2])
df.head()
   month       production
0  1962-01-01  589
1  1962-02-01  561
2  1962-03-01  640
3  1962-04-01  656
4  1962-05-01  727

The input to StatsForecast is always a data frame in long format with three columns: unique_id, ds and y:

  • The unique_id (string, int or category) represents an identifier for the series.

  • The ds (datestamp) column should be of a format expected by Pandas, ideally YYYY-MM-DD for a date or YYYY-MM-DD HH:MM:SS for a timestamp.

  • The y (numeric) represents the measurement we wish to forecast.

df["unique_id"]="1"
df.columns=["ds", "y", "unique_id"]
df.head()
   ds          y    unique_id
0  1962-01-01  589  1
1  1962-02-01  561  1
2  1962-03-01  640  1
3  1962-04-01  656  1
4  1962-05-01  727  1
print(df.dtypes)
ds           object
y             int64
unique_id    object
dtype: object

We can see that our time variable ds is in object format; we need to convert it to a datetime format:

df["ds"] = pd.to_datetime(df["ds"])

Explore Data with the plot method

Plot some series using the plot method from the StatsForecast class. This method plots a random series from the dataset and is useful for basic EDA.

from statsforecast import StatsForecast

StatsForecast.plot(df, engine="matplotlib")

Autocorrelation plots

fig, axs = plt.subplots(nrows=1, ncols=2)

plot_acf(df["y"],  lags=30, ax=axs[0],color="fuchsia")
axs[0].set_title("Autocorrelation");

plot_pacf(df["y"],  lags=30, ax=axs[1],color="lime")
axs[1].set_title('Partial Autocorrelation')

plt.show();

Decomposition of the time series

How to decompose a time series and why?

In time series analysis, understanding past data is essential for forecasting new values. More formally, it is very important to know the patterns that values follow over time. Many factors can cause our forecast values to fall in the wrong direction. Basically, a time series consists of four components, and variation in these components causes changes in the pattern of the time series. These components are:

  • Level: the baseline value around which the series fluctuates over time.
  • Trend: the component that causes increasing or decreasing patterns in a time series.
  • Seasonality: a cyclical pattern that repeats over a short, fixed period and causes short-term increasing or decreasing patterns.
  • Residual/Noise: the random variations in the time series.

Combining these components over time forms the time series. Most time series contain a level and noise; trend and seasonality are optional.

If seasonality and trend are part of the time series, they will affect the forecast values, since the pattern of the forecasted series will follow them.

The components of a time series can be combined in two ways: additively or multiplicatively.

Additive time series

If the components are added together to make the time series, then the series is called an additive time series. Visually, we can say that a time series is additive if its increasing or decreasing pattern has a similar magnitude throughout the series. An additive time series can be represented as: $y(t) = \text{Level} + \text{Trend} + \text{Seasonality} + \text{Noise}$

Multiplicative time series

If the components are multiplied together, then the time series is called a multiplicative time series. Visually, if the time series shows exponential growth or decline over time, it can be considered a multiplicative time series. A multiplicative time series can be represented as:

$y(t) = \text{Level} \times \text{Trend} \times \text{Seasonality} \times \text{Noise}$

Additive

from statsmodels.tsa.seasonal import seasonal_decompose 
a = seasonal_decompose(df["y"], model = "additive", period=12)
a.plot();

Multiplicative

from statsmodels.tsa.seasonal import seasonal_decompose 
a = seasonal_decompose(df["y"], model = "Multiplicative", period=12)
a.plot();

Split the data into training and testing

Let’s divide our data into two sets: 1. data to train our optimized Theta model, and 2. data to test our model.

For the test data we will use the last 12 months to test and evaluate the performance of our model.

train = df[df.ds<='1974-12-01'] 
test = df[df.ds>'1974-12-01']
train.shape, test.shape
((156, 3), (12, 3))

Now let’s plot the training data and the test data.

sns.lineplot(train,x="ds", y="y", label="Train", linestyle="--")
sns.lineplot(test, x="ds", y="y", label="Test")
plt.title("")
plt.ylabel("Monthly Milk Production")
plt.xlabel("Monthly")
plt.show()

Implementation of OptimizedTheta with StatsForecast

The parameters of the OptimizedTheta model are listed below. For more information, visit the documentation.

season_length : int
    Number of observations per unit of time. Ex: 24 Hourly data.
decomposition_type : str
    Seasonal decomposition type, 'multiplicative' (default) or 'additive'.
alias : str
    Custom name of the model.
prediction_intervals : Optional[ConformalIntervals]
    Information to compute conformal prediction intervals.
    By default, the model will compute the native prediction
    intervals.
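For illustration, a hypothetical instantiation that sets these parameters explicitly might look like the following sketch (the ConformalIntervals import path follows the StatsForecast documentation; the values are assumptions for this dataset):

from statsforecast.models import OptimizedTheta
from statsforecast.utils import ConformalIntervals

model = OptimizedTheta(
    season_length=12,                # monthly data
    decomposition_type="additive",   # or 'multiplicative' (default)
    alias="OTM",                     # custom model name
    prediction_intervals=ConformalIntervals(h=12, n_windows=2),  # conformal PIs
)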

Load libraries

from statsforecast import StatsForecast
from statsforecast.models import OptimizedTheta

Instantiating Model

Import and instantiate the model. Setting the season_length argument is sometimes tricky. This article on seasonal periods by the master, Rob Hyndman, can be useful.

season_length = 12 # Monthly data 
horizon = len(test) # number of predictions

models = [OptimizedTheta(season_length=season_length, 
                decomposition_type="additive")] # multiplicative   additive

We fit the model by instantiating a new StatsForecast object with the following parameters:

  • models: a list of models. Select the models you want from models and import them.

  • freq: a string indicating the frequency of the data. (See pandas’ available frequencies.)

  • n_jobs: int, number of jobs used in the parallel processing; use -1 for all cores.

  • fallback_model: a model to be used if a model fails.

Any settings are passed into the constructor. Then you call its fit method and pass in the historical data frame.

sf = StatsForecast(df=train,
                   models=models,
                   freq='MS', 
                   n_jobs=-1)

Fit the Model

sf.fit()
StatsForecast(models=[OptimizedTheta])

Let’s see the results of our Optimized Theta Model (OTM), which we can inspect with the following instruction:

result=sf.fitted_[0,0].model_
print(result.keys())
print(result['fit'])
dict_keys(['mse', 'amse', 'fit', 'residuals', 'm', 'states', 'par', 'n', 'modeltype', 'mean_y', 'decompose', 'decomposition_type', 'seas_forecast', 'fitted'])
results(x=array([-83.14191626,   0.73681394,  12.45013763]), fn=10.448217519858634, nit=47, simplex=array([[-58.73988124,   0.7441127 ,  11.69842922],
       [-49.97233449,   0.73580297,  11.41787513],
       [-83.14191626,   0.73681394,  12.45013763],
       [-77.04867427,   0.73498431,  11.99254037]]))

Let us now visualize the residuals of our model.

As we can see, the result obtained above is output as a dictionary. To extract each element from the dictionary, we use the .get() method and then store the result in a pd.DataFrame().

residual=pd.DataFrame(result.get("residuals"), columns=["residual Model"])
residual
     residual Model
0       -271.899414
1       -114.671692
2          4.768066
...             ...
153      -60.233887
154      -92.472839
155      -44.143982
import scipy.stats as stats

fig, axs = plt.subplots(nrows=2, ncols=2)

residual.plot(ax=axs[0,0])
axs[0,0].set_title("Residuals");

sns.histplot(residual["residual Model"], kde=True, ax=axs[0,1]);  # sns.distplot is deprecated in recent seaborn
axs[0,1].set_title("Density plot - Residual");

stats.probplot(residual["residual Model"], dist="norm", plot=axs[1,0])
axs[1,0].set_title('Plot Q-Q')

plot_acf(residual,  lags=35, ax=axs[1,1],color="fuchsia")
axs[1,1].set_title("Autocorrelation");

plt.show();

Forecast Method

If you want to gain speed in production settings where you have multiple series or models, we recommend using the StatsForecast.forecast method instead of .fit and .predict.

The main difference is that .forecast does not store the fitted values and is highly scalable in distributed environments.

The forecast method takes two arguments: the number of steps ahead h (the horizon) and, optionally, level.

  • h (int): represents the forecast h steps into the future. In this case, 12 months ahead.

  • level (list of floats): this optional parameter is used for probabilistic forecasting. Set the level (or confidence percentile) of your prediction interval. For example, level=[90] means that the model expects the real value to be inside that interval 90% of the times.

The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals. Depending on your computer, this step should take around a minute.

Y_hat = sf.forecast(horizon, fitted=True)
Y_hat
           ds          OptimizedTheta
unique_id
1          1975-01-01      839.682800
1          1975-02-01      802.071838
1          1975-03-01      896.117126
...               ...             ...
1          1975-10-01      824.135498
1          1975-11-01      795.691162
1          1975-12-01      833.316345

Let’s visualize the fitted values

values=sf.forecast_fitted_values()
values.head()
           ds          y      OptimizedTheta
unique_id
1          1962-01-01  589.0      860.899414
1          1962-02-01  561.0      675.671692
1          1962-03-01  640.0      635.231934
1          1962-04-01  656.0      614.731323
1          1962-05-01  727.0      609.770752
StatsForecast.plot(values)

Adding 95% confidence interval with the forecast method

sf.forecast(h=horizon, level=[95])
           ds          OptimizedTheta  OptimizedTheta-lo-95  OptimizedTheta-hi-95
unique_id
1          1975-01-01      839.682800            742.509583            955.414307
1          1975-02-01      802.071838            643.581360            945.119263
1          1975-03-01      896.117126            710.785217           1065.057495
...               ...             ...                   ...                   ...
1          1975-10-01      824.135498            555.948975           1084.320312
1          1975-11-01      795.691162            503.148010           1036.519531
1          1975-12-01      833.316345            530.259949           1106.636963
Y_hat=Y_hat.reset_index()
Y_hat
    unique_id  ds          OptimizedTheta
0   1          1975-01-01      839.682800
1   1          1975-02-01      802.071838
2   1          1975-03-01      896.117126
..        ...         ...             ...
9   1          1975-10-01      824.135498
10  1          1975-11-01      795.691162
11  1          1975-12-01      833.316345
# Merge the forecasts with the true values
test = test.copy()  # work on a copy to avoid SettingWithCopyWarning
test['unique_id'] = test['unique_id'].astype(int)  # align dtypes for the merge
Y_hat1 = test.merge(Y_hat, how='left', on=['unique_id', 'ds'])
Y_hat1
    ds          y    unique_id  OptimizedTheta
0   1975-01-01  834  1              839.682800
1   1975-02-01  782  1              802.071838
2   1975-03-01  892  1              896.117126
..         ...  ...        ...             ...
9   1975-10-01  827  1              824.135498
10  1975-11-01  797  1              795.691162
11  1975-12-01  843  1              833.316345
fig, ax = plt.subplots(1, 1)
plot_df = pd.concat([train, Y_hat1]).set_index('ds')
plot_df[['y', "OptimizedTheta"]].plot(ax=ax, linewidth=2)
ax.set_title('Forecast', fontsize=22)
ax.set_ylabel('Monthly Milk Production', fontsize=20)
ax.set_xlabel('Month [t]', fontsize=20)
ax.legend(prop={'size': 15})
ax.grid(True)

Predict method with confidence interval

To generate forecasts use the predict method.

The predict method takes two arguments: the number of steps ahead h (the horizon) and, optionally, level.

  • h (int): represents the forecast h steps into the future. In this case, 12 months ahead.

  • level (list of floats): this optional parameter is used for probabilistic forecasting. Set the level (or confidence percentile) of your prediction interval. For example, level=[95] means that the model expects the real value to be inside that interval 95% of the times.

The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals.

This step should take less than 1 second.

sf.predict(h=horizon)
           ds          OptimizedTheta
unique_id
1          1975-01-01      839.682800
1          1975-02-01      802.071838
1          1975-03-01      896.117126
...               ...             ...
1          1975-10-01      824.135498
1          1975-11-01      795.691162
1          1975-12-01      833.316345
forecast_df = sf.predict(h=horizon, level=[80,95]) 
forecast_df
           ds          OptimizedTheta  OptimizedTheta-lo-80  OptimizedTheta-hi-80  OptimizedTheta-lo-95  OptimizedTheta-hi-95
unique_id
1          1975-01-01      839.682800            766.665955            928.326233            742.509583            955.414307
1          1975-02-01      802.071838            704.290100            899.335876            643.581360            945.119263
1          1975-03-01      896.117126            761.334717           1007.408630            710.785217           1065.057495
...               ...             ...                   ...                   ...                   ...                   ...
1          1975-10-01      824.135498            623.904114            996.567322            555.948975           1084.320312
1          1975-11-01      795.691162            576.546753            975.490967            503.148010           1036.519531
1          1975-12-01      833.316345            606.713989           1033.886230            530.259949           1106.636963

We can join the forecast result with the historical data using the pandas function pd.concat(), and then use this result for plotting.

pd.concat([df, forecast_df]).set_index('ds')
            y      unique_id  OptimizedTheta  OptimizedTheta-lo-80  OptimizedTheta-hi-80  OptimizedTheta-lo-95  OptimizedTheta-hi-95
ds
1962-01-01  589.0  1                     NaN                   NaN                   NaN                   NaN                   NaN
1962-02-01  561.0  1                     NaN                   NaN                   NaN                   NaN                   NaN
1962-03-01  640.0  1                     NaN                   NaN                   NaN                   NaN                   NaN
...           ...  ...                   ...                   ...                   ...                   ...                   ...
1975-10-01    NaN  NaN            824.135498            623.904114            996.567322            555.948975           1084.320312
1975-11-01    NaN  NaN            795.691162            576.546753            975.490967            503.148010           1036.519531
1975-12-01    NaN  NaN            833.316345            606.713989           1033.886230            530.259949           1106.636963

Now let’s visualize the result of our forecast together with the historical data of our time series, and also draw the prediction intervals that we obtained when forecasting at the 95% level.

def plot_forecasts(y_hist, y_true, y_pred, models):
    _, ax = plt.subplots(1, 1, figsize=(20, 7))
    y_true = y_true.merge(y_pred, how='left', on=['unique_id', 'ds'])
    df_plot = pd.concat([y_hist, y_true]).set_index('ds').tail(12*10)
    df_plot[['y'] + models].plot(ax=ax, linewidth=3)
    ax.fill_between(df_plot.index,
                df_plot['OptimizedTheta-lo-80'],
                df_plot['OptimizedTheta-hi-80'],  # shade from lower to upper bound
                alpha=.20,
                color='orange',
                label='OptimizedTheta_level_80')
    ax.fill_between(df_plot.index,
                df_plot['OptimizedTheta-lo-95'],
                df_plot['OptimizedTheta-hi-95'],
                alpha=.3,
                color='lime',
                label='OptimizedTheta_level_95')
    ax.set_title('Forecast', fontsize=22)
    ax.set_ylabel("Monthly Milk Production", fontsize=20)
    ax.set_xlabel('Month', fontsize=20)
    ax.legend(prop={'size': 20})
    ax.grid(True)
    plt.show()
plot_forecasts(train, test, forecast_df, models=['OptimizedTheta'])

Let’s plot the same graph using the plot function that comes in Statsforecast, as shown below.

sf.plot(df, forecast_df, level=[95])

Cross-validation

In previous steps, we’ve taken our historical data to predict the future. However, to assess its accuracy, we would also like to know how the model would have performed in the past. Cross-validation lets you assess the accuracy and robustness of your models on your data.

With time series data, Cross Validation is done by defining a sliding window across the historical data and predicting the period following it. This form of cross-validation allows us to arrive at a better estimation of our model’s predictive abilities across a wider range of temporal instances while also keeping the data in the training set contiguous as is required by our models.

The following graph depicts such a Cross Validation Strategy:

Perform time series cross-validation

Cross-validation of time series models is considered a best practice but most implementations are very slow. The statsforecast library implements cross-validation as a distributed operation, making the process less time-consuming to perform. If you have big datasets you can also perform Cross Validation in a distributed cluster using Ray, Dask or Spark.

In this case, we want to evaluate the performance of the model over the last three windows (n_windows=3), forecasting 12 months in each window (h=horizon) and moving the window forward 12 months at a time (step_size=12). Depending on your computer, this step should take around 1 min.

The cross_validation method from the StatsForecast class takes the following arguments.

  • df: training data frame

  • h (int): represents h steps into the future that are being forecasted. In this case, 12 months ahead.

  • step_size (int): step size between each window. In other words: how often do you want to run the forecasting processes.

  • n_windows(int): number of windows used for cross validation. In other words: what number of forecasting processes in the past do you want to evaluate.

crossvalidation_df = sf.cross_validation(df=train,
                                         h=horizon,
                                         step_size=12,
                                         n_windows=3)

The crossvalidation_df object is a new data frame that includes the following columns:

  • unique_id: index. If you don’t like working with index just run crossvalidation_df.reset_index()
  • ds: datestamp or temporal index
  • cutoff: the last datestamp or temporal index for the n_windows.
  • y: true value
  • "model": columns with the model’s name and fitted value.
crossvalidation_df
           ds          cutoff      y      OptimizedTheta
unique_id
1          1972-01-01  1971-12-01  826.0      828.836365
1          1972-02-01  1971-12-01  799.0      792.592346
1          1972-03-01  1971-12-01  890.0      883.269592
...               ...         ...    ...             ...
1          1974-10-01  1973-12-01  812.0      812.183838
1          1974-11-01  1973-12-01  773.0      783.898376
1          1974-12-01  1973-12-01  813.0      821.124329

Model Evaluation

We can now compute the accuracy of the forecast using an appropriate accuracy metric. Here we’ll use the Root Mean Squared Error (RMSE). To do this, we first need to install datasetsforecast, a Python library developed by Nixtla that includes a function to compute the RMSE.

!pip install datasetsforecast
from datasetsforecast.losses import rmse

The function to compute the RMSE takes two arguments:

  1. The actual values.
  2. The forecasts, in this case, Optimized Theta Model (OTM).
rmse_cv = rmse(crossvalidation_df['y'], crossvalidation_df["OptimizedTheta"])  # avoid shadowing the rmse function
print("RMSE using cross-validation: ", rmse_cv)
RMSE using cross-validation:  14.504839

As you have noticed, we have used the cross validation results to perform the evaluation of our model.

Now we are going to evaluate our model using the results of the predictions, with several metrics (MAE, MAPE, MASE, RMSE, SMAPE) to assess accuracy.

from datasetsforecast.losses import (mae, mape, mase, rmse, smape)
def evaluate_performance(y_hist, y_true, y_pred, model):
    y_true = y_true.merge(y_pred, how='left', on=['unique_id', 'ds'])
    evaluation = {}
    evaluation[model] = {}
    for metric in [mase, mae, mape, rmse, smape]:
        metric_name = metric.__name__
        if metric_name == 'mase':
            evaluation[model][metric_name] = metric(y_true['y'].values, 
                                                y_true[model].values, 
                                                y_hist['y'].values, seasonality=12)
        else:
            evaluation[model][metric_name] = metric(y_true['y'].values, y_true[model].values)
    return pd.DataFrame(evaluation).T
evaluate_performance(train, test, Y_hat, model="OptimizedTheta")
                mae       mape      mase     rmse      smape
OptimizedTheta  6.740209  0.782753  0.30312  8.701501  0.778689

Acknowledgements

We would like to thank Naren Castellon for writing this tutorial.

References

  1. Kostas I. Nikolopoulos and Dimitrios D. Thomakos (2019). Forecasting with the Theta Method: Theory and Applications. John Wiley & Sons Ltd.
  2. Jose A. Fiorucci, Tiago R. Pellegrini, Francisco Louzada, Fotios Petropoulos and Anne B. Koehler (2016). “Models for optimising the theta method and their relationship to state space models”. International Journal of Forecasting.
  3. Nixtla Parameters.
  4. Pandas available frequencies.
  5. Rob J. Hyndman and George Athanasopoulos (2018). “Forecasting: Principles and Practice, Time series cross-validation”.
  6. Seasonal periods, Rob J. Hyndman.