What is AutoARIMA with StatsForecast?

An autoARIMA is a time series model that uses an automatic process to select the optimal ARIMA (Autoregressive Integrated Moving Average) model parameters for a given time series. ARIMA is a widely used statistical model for modeling and predicting time series.

The process of automatic parameter selection in an autoARIMA model is performed using statistical and optimization techniques, such as the Akaike Information Criterion (AIC) and cross-validation, to identify the optimal values for the autoregressive, integration, and moving-average parameters of the ARIMA model.

Automatic parameter selection is useful because it can be difficult to determine the optimal parameters of an ARIMA model for a given time series without a thorough understanding of the underlying stochastic process that generates the time series. The autoARIMA model automates the parameter selection process and can provide a fast and effective solution for time series modeling and forecasting.
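To make this concrete, here is a minimal sketch of brute-force order selection by AIC using statsmodels. This is only an illustration of the idea; StatsForecast's AutoARIMA follows the much faster Hyndman-Khandakar stepwise search used by R's auto.arima rather than an exhaustive grid.

import itertools

from statsmodels.tsa.arima.model import ARIMA

def select_arima_by_aic(y, max_p=3, max_d=2, max_q=3):
    """Exhaustively fit ARIMA(p,d,q) candidates and keep the lowest-AIC one."""
    best_aic, best_order = float("inf"), None
    for p, d, q in itertools.product(range(max_p + 1), range(max_d + 1), range(max_q + 1)):
        try:
            fitted = ARIMA(y, order=(p, d, q)).fit()
        except Exception:
            continue  # skip orders that fail to estimate
        if fitted.aic < best_aic:
            best_aic, best_order = fitted.aic, (p, d, q)
    return best_order, best_aic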

The statsforecast.models library provides the AutoARIMA function, a Python implementation of autoARIMA that automatically selects the optimal parameters of an ARIMA model for a given time series.

Definition of the ARIMA model

An ARIMA (autoregressive integrated moving average) process is the combination of an autoregressive process AR(p), integration I(d), and a moving average process MA(q).

Just like the ARMA process, the ARIMA process states that the present value is dependent on past values, coming from the AR(p) portion, and past errors, coming from the MA(q) portion. However, instead of using the original series, denoted as $y_t$, the ARIMA process uses the differenced series, denoted as $y'_t$. Note that $y'_t$ can represent a series that has been differenced more than once.

Therefore, the mathematical expression of the ARIMA(p,d,q) process states that the present value of the differenced series $y'_t$ is equal to the sum of a constant $c$, past values of the differenced series $\phi_p y'_{t-p}$, past error terms $\theta_q \varepsilon_{t-q}$, and a current error term $\varepsilon_t$, as shown in Equation (1):

$y'_t = c + \phi_1 y'_{t-1} + \cdots + \phi_p y'_{t-p} + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q} + \varepsilon_t \qquad (1)$

where $y'_t$ is the differenced series (it may have been differenced more than once). The "predictors" on the right-hand side include both lagged values of $y_t$ and lagged errors. We call this an ARIMA(p,d,q) model, where

  • $p$ = order of the autoregressive part
  • $d$ = degree of first differencing involved
  • $q$ = order of the moving average part

The same stationarity and invertibility conditions that are used for autoregressive and moving average models also apply to an ARIMA model.
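As a quick illustration of the differencing controlled by $d$ (with made-up sample values):

import numpy as np

y = np.array([10.0, 12.0, 15.0, 19.0, 24.0])
y_diff1 = np.diff(y, n=1)  # d=1: y'_t  = y_t  - y_{t-1}  -> [2. 3. 4. 5.]
y_diff2 = np.diff(y, n=2)  # d=2: y''_t = y'_t - y'_{t-1} -> [1. 1. 1.]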

Many of the models we have already discussed are special cases of the ARIMA model, as shown in the following table.

Model | p d q | Equation | Method
--- | --- | --- | ---
ARIMA(0,0,0) | 0 0 0 | $y_t = Y_t$ | White noise
ARIMA(0,1,0) | 0 1 0 | $y_t = Y_t - Y_{t-1}$ | Random walk
ARIMA(0,2,0) | 0 2 0 | $y_t = Y_t - 2Y_{t-1} + Y_{t-2}$ | Constant
ARIMA(1,0,0) | 1 0 0 | $\hat Y_t = \mu + \Phi_1 Y_{t-1} + \epsilon$ | AR(1): first-order autoregressive model
ARIMA(2,0,0) | 2 0 0 | $\hat Y_t = \Phi_0 + \Phi_1 Y_{t-1} + \Phi_2 Y_{t-2} + \epsilon$ | AR(2): second-order autoregressive model
ARIMA(1,1,0) | 1 1 0 | $\hat Y_t = \mu + Y_{t-1} + \Phi_1 (Y_{t-1} - Y_{t-2})$ | Differenced first-order autoregressive model
ARIMA(0,1,1) | 0 1 1 | $\hat Y_t = Y_{t-1} - \Theta_1 e_{t-1}$ | Simple exponential smoothing
ARIMA(0,0,1) | 0 0 1 | $\hat Y_t = \mu_0 + \epsilon_t - \omega_1 \epsilon_{t-1}$ | MA(1): first-order moving average model
ARIMA(0,0,2) | 0 0 2 | $\hat Y_t = \mu_0 + \epsilon_t - \omega_1 \epsilon_{t-1} - \omega_2 \epsilon_{t-2}$ | MA(2): second-order moving average model
ARIMA(1,0,1) | 1 0 1 | $\hat Y_t = \Phi_0 + \Phi_1 Y_{t-1} + \epsilon_t - \omega_1 \epsilon_{t-1}$ | ARMA model
ARIMA(1,1,1) | 1 1 1 | $\Delta Y_t = \Phi_1 \Delta Y_{t-1} + \epsilon_t - \omega_1 \epsilon_{t-1}$ | ARIMA model
ARIMA(1,1,2) | 1 1 2 | $\hat Y_t = Y_{t-1} + \Phi_1 (Y_{t-1} - Y_{t-2}) - \Theta_1 e_{t-1} - \Theta_2 e_{t-2}$ | Damped-trend linear exponential smoothing
ARIMA(0,2,1) or (0,2,2) | 0 2 1 | $\hat Y_t = 2Y_{t-1} - Y_{t-2} - \Theta_1 e_{t-1} - \Theta_2 e_{t-2}$ | Linear exponential smoothing

Once we start combining components in this way to form more complicated models, it is much easier to work with backshift notation. For example, Equation (1) can be written in backshift notation as:

$(1 - \phi_1 B - \cdots - \phi_p B^p)(1 - B)^d y_t = c + (1 + \theta_1 B + \cdots + \theta_q B^q)\varepsilon_t$

where $B$ is the backshift operator, $By_t = y_{t-1}$.

Selecting appropriate values for p, d and q can be difficult. However, the AutoARIMA() function from statsforecast will do it for you automatically.

For more information, see the references at the end of this tutorial.

Advantages of the AutoARIMA() model

Using an AutoARIMA() model to model and predict time series has several advantages, including:

  1. Automation of the parameter selection process: The AutoARIMA() function automates the ARIMA model parameter selection process, which can save the user time and effort by eliminating the need to manually try different combinations of parameters.

  2. Reduction of prediction error: By automatically selecting optimal parameters, the ARIMA model can improve the accuracy of predictions compared to manually selected ARIMA models.

  3. Identification of complex patterns: The AutoARIMA() function can identify complex patterns in the data that may be difficult to detect visually or with other time series modeling techniques.

  4. Flexibility in the choice of the parameter selection methodology: The AutoARIMA() function can use different methodologies to select the optimal parameters, such as the Akaike Information Criterion (AIC), cross-validation, and others, which allows the user to choose the methodology that best suits their needs.

In general, using the AutoARIMA() function can help improve the efficiency and accuracy of time series modeling and forecasting, especially for users who are inexperienced with manual parameter selection for ARIMA models.

Main results

We compared accuracy and speed against pmdarima, Rob Hyndman’s forecast package and Facebook’s Prophet. We used the Daily, Hourly and Weekly data from the M4 competition.

The following table summarizes the results. As can be seen, our auto_arima is the best model in accuracy (measured by the MASE loss) and time, even compared with the original implementation in R.

dataset | metric | auto_arima_nixtla | auto_arima_pmdarima [1] | auto_arima_r | prophet
--- | --- | --- | --- | --- | ---
Daily | MASE | 3.26 | 3.35 | 4.46 | 14.26
Daily | time | 1.41 | 27.61 | 1.81 | 514.33
Hourly | MASE | 0.92 | — | 1.02 | 1.78
Hourly | time | 12.92 | — | 23.95 | 17.27
Weekly | MASE | 2.34 | 2.47 | 2.58 | 7.29
Weekly | time | 0.42 | 2.92 | 0.22 | 19.82

[1] The model auto_arima from pmdarima had a problem with Hourly data. An issue was opened.

The following table summarizes the data details.

group | n_series | mean_length | std_length | min_length | max_length
--- | --- | --- | --- | --- | ---
Daily | 4,227 | 2,371 | 1,756 | 107 | 9,933
Hourly | 414 | 901 | 127 | 748 | 1,008
Weekly | 359 | 1,035 | 707 | 93 | 2,610

Loading libraries and data

Tip

Statsforecast will be needed. To install, see instructions.
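If it is not installed yet, statsforecast can be installed from PyPI (use !pip inside a notebook):

pip install statsforecast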

Next, we import plotting libraries and configure the plotting style.

import numpy as np
import pandas as pd

import scipy.stats as stats
import matplotlib.pyplot as plt
import seaborn as sns
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.graphics.tsaplots import plot_pacf
plt.style.use('fivethirtyeight')
plt.rcParams['lines.linewidth'] = 1.5
dark_style = {
    'figure.facecolor': '#212946',
    'axes.facecolor': '#212946',
    'savefig.facecolor':'#212946',
    'axes.grid': True,
    'axes.grid.which': 'both',
    'axes.spines.left': False,
    'axes.spines.right': False,
    'axes.spines.top': False,
    'axes.spines.bottom': False,
    'grid.color': '#2A3459',
    'grid.linewidth': '1',
    'text.color': '0.9',
    'axes.labelcolor': '0.9',
    'xtick.color': '0.9',
    'ytick.color': '0.9',
    'font.size': 12 }
plt.rcParams.update(dark_style)

from pylab import rcParams
rcParams['figure.figsize'] = (18,7)

Loading Data

df = pd.read_csv("https://raw.githubusercontent.com/Naren8520/Serie-de-tiempo-con-Machine-Learning/main/Data/candy_production.csv")
df.head()
  | observation_date | IPG3113N
--- | --- | ---
0 | 1972-01-01 | 85.6945
1 | 1972-02-01 | 71.8200
2 | 1972-03-01 | 66.0229
3 | 1972-04-01 | 64.5645
4 | 1972-05-01 | 65.0100

The input to StatsForecast is always a data frame in long format with three columns: unique_id, ds and y:

  • The unique_id (string, int or category) represents an identifier for the series.

  • The ds (datestamp) column should be of a format expected by Pandas, ideally YYYY-MM-DD for a date or YYYY-MM-DD HH:MM:SS for a timestamp.

  • The y (numeric) represents the measurement we wish to forecast.

df["unique_id"]="1"
df.columns=["ds", "y", "unique_id"]
df.head()
  | ds | y | unique_id
--- | --- | --- | ---
0 | 1972-01-01 | 85.6945 | 1
1 | 1972-02-01 | 71.8200 | 1
2 | 1972-03-01 | 66.0229 | 1
3 | 1972-04-01 | 64.5645 | 1
4 | 1972-05-01 | 65.0100 | 1
print(df.dtypes)
ds            object
y            float64
unique_id     object
dtype: object

We need to convert ds from the object type to datetime.

df["ds"] = pd.to_datetime(df["ds"])

Explore data with the plot method

Plot a series using the plot method from the StatsForecast class. This method plots a random series from the dataset and is useful for basic EDA.

from statsforecast import StatsForecast

StatsForecast.plot(df, engine="matplotlib")

Autocorrelation plots

fig, axs = plt.subplots(nrows=1, ncols=2)

plot_acf(df["y"],  lags=60, ax=axs[0],color="fuchsia")
axs[0].set_title("Autocorrelation");

plot_pacf(df["y"],  lags=60, ax=axs[1],color="lime")
axs[1].set_title('Partial Autocorrelation')

plt.show();

Decomposition of the time series

How to decompose a time series and why?

In time series analysis, to forecast new values it is very important to know past data. More formally, it is very important to know the patterns that values follow over time. Many things can cause our forecast values to go in the wrong direction. Basically, a time series consists of four components, and variation in these components causes changes in the pattern of the time series. These components are:

  • Level: This is the average value around which the series varies over time.
  • Trend: The trend is the value that causes increasing or decreasing patterns in a time series.
  • Seasonality: This is a cyclical event that occurs in a time series for a short time and causes short-term increasing or decreasing patterns in a time series.
  • Residual/Noise: These are the random variations in the time series.

Combining these components over time forms a time series. Most time series consist of a level and a noise/residual component, while trend and seasonality are optional.

If seasonality and trend are part of the time series, they will affect the forecast values, since the pattern of the forecasted time series may differ from the pattern of past values.

The components of a time series can be combined in two ways: additive and multiplicative.

Additive time series

If the components of the time series are added together to make the time series, then it is called an additive time series. By visualization, we can say that a time series is additive if its increasing or decreasing pattern has a similar magnitude throughout the series. The mathematical form of an additive time series is: $y(t) = \text{Level} + \text{Trend} + \text{Seasonality} + \text{Noise}$

Multiplicative time series

If the components of the time series are multiplied together, then the time series is called a multiplicative time series. Visually, if the time series shows exponential growth or decline over time, it can be considered a multiplicative time series. The mathematical form of a multiplicative time series is:

$y(t) = \text{Level} \times \text{Trend} \times \text{Seasonality} \times \text{Noise}$

from statsmodels.tsa.seasonal import seasonal_decompose 
a = seasonal_decompose(df["y"], model = "add", period=12)
a.plot();
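As a quick numeric check on the decomposition, we can compute the strength of the trend and seasonal components following Hyndman & Athanasopoulos. This is a sketch based on the a object returned above; values close to 1 indicate a strong component.

# Strength of a component: 1 - Var(residual) / Var(component + residual)
resid_var = a.resid.dropna().var()
trend_strength = max(0.0, 1 - resid_var / (a.trend + a.resid).dropna().var())
seasonal_strength = max(0.0, 1 - resid_var / (a.seasonal + a.resid).dropna().var())
print(f"trend strength: {trend_strength:.2f}, seasonal strength: {seasonal_strength:.2f}")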

Split the data into training and testing

Let’s divide our data into two sets:

  1. Data to train our AutoARIMA model.
  2. Data to test our model.

For the test data we will use the last 12 months to test and evaluate the performance of our model.

Y_train_df = df[df.ds<='2016-08-01'] 
Y_test_df = df[df.ds>'2016-08-01']
Y_train_df.shape, Y_test_df.shape
((536, 3), (12, 3))

Now let’s plot the training data and the test data.

sns.lineplot(data=Y_train_df, x="ds", y="y", label="Train")
sns.lineplot(data=Y_test_df, x="ds", y="y", label="Test")
plt.show()

Implementation of AutoARIMA with StatsForecast

The parameters of the AutoARIMA model are listed below. For more information, visit the documentation.

    d : Optional[int]
        Order of first-differencing.
    D : Optional[int]
        Order of seasonal-differencing.
    max_p : int
        Max autoregressive order p.
    max_q : int
        Max moving average order q.
    max_P : int
        Max seasonal autoregressive order P.
    max_Q : int
        Max seasonal moving average order Q.
    max_order : int
        Max p+q+P+Q value if not stepwise selection.
    max_d : int
        Max non-seasonal differences.
    max_D : int
        Max seasonal differences.
    start_p : int
        Starting value of p in stepwise procedure.
    start_q : int
        Starting value of q in stepwise procedure.
    start_P : int
        Starting value of P in stepwise procedure.
    start_Q : int
        Starting value of Q in stepwise procedure.
    stationary : bool
        If True, restricts search to stationary models.
    seasonal : bool
        If False, restricts search to non-seasonal models.
    ic : str
        Information criterion to be used in model selection.
    stepwise : bool
        If True, will do stepwise selection (faster).
    nmodels : int
        Number of models considered in stepwise search.
    trace : bool
        If True, the searched ARIMA models are reported.
    approximation : Optional[bool]
        If True, uses conditional sums-of-squares estimation, with a final MLE fit.
    method : Optional[str]
        Fitting method: maximum likelihood or sums-of-squares.
    truncate : Optional[int]
        Number of most recent observations used during model selection (the series is truncated).
    test : str
        Unit root test to use. See `ndiffs` for details.
    test_kwargs : Optional[str]
        Unit root test additional arguments.
    seasonal_test : str
        Selection method for seasonal differences.
    seasonal_test_kwargs : Optional[dict]
        Seasonal unit root test arguments.
    allowdrift : bool (default True)
        If True, models with drift terms are considered.
    allowmean : bool (default True)
        If True, models with a non-zero mean are considered.
    blambda : Optional[float]
        Box-Cox transformation parameter.
    biasadj : bool
        Use adjusted back-transformed mean for Box-Cox transformations.
    season_length : int
        Number of observations per unit of time. Ex: 24 Hourly data.
    alias : str
        Custom name of the model.
    prediction_intervals : Optional[ConformalIntervals]
        Information to compute conformal prediction intervals.
        By default, the model will compute the native prediction
        intervals.
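As an illustration (a sketch only; the defaults are usually sensible), the search can be constrained using parameters from the list above, for example:

from statsforecast.models import AutoARIMA

models = [AutoARIMA(
    season_length=12,   # monthly data
    max_p=3, max_q=3,   # cap the non-seasonal orders considered
    stepwise=True,      # stepwise search (faster than exhaustive)
    trace=True,         # report the ARIMA models evaluated during the search
)]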

Load libraries

from statsforecast import StatsForecast
from statsforecast.models import AutoARIMA
from statsforecast.arima import arima_string

Instantiating Model

Import and instantiate the models. Setting the season_length argument is sometimes tricky. This article on seasonal periods by Rob Hyndman can be useful.

season_length = 12 # Monthly data 
horizon = len(Y_test_df) # number of predictions

models = [AutoARIMA(season_length=season_length)]

We fit the models by instantiating a new StatsForecast object with the following parameters:

  • models: a list of models. Select the models you want from models and import them.

  • freq: a string indicating the frequency of the data. (See panda’s available frequencies.)

  • n_jobs: int, number of jobs used in the parallel processing, use -1 for all cores.

  • fallback_model: a model to be used if a model fails.

Any settings are passed into the constructor. Then you call its fit method and pass in the historical data frame.

sf = StatsForecast(df=Y_train_df,
                   models=models,
                   freq='MS', 
                   n_jobs=-1)

Fit the Model

sf.fit()
StatsForecast(models=[AutoARIMA])

Once we have fitted our model, we can use the arima_string function to see the parameters that the model has found.

arima_string(sf.fitted_[0,0].model_)
'ARIMA(1,0,0)(0,1,2)[12]                   '

The automation process found that the best model has the form ARIMA(1,0,0)(0,1,2)[12]. This means our model has $p=1$, that is, one non-seasonal autoregressive term; a seasonal part with $D=1$, that is, one seasonal difference; and $Q=2$, that is, two seasonal moving-average terms.

To see the values of the terms of our model, we can use the following statement to display the full result of the fitted model.

result=sf.fitted_[0,0].model_
print(result.keys())
print(result['arma'])
dict_keys(['coef', 'sigma2', 'var_coef', 'mask', 'loglik', 'aic', 'arma', 'residuals', 'code', 'n_cond', 'nobs', 'model', 'bic', 'aicc', 'ic', 'xreg', 'x', 'lambda'])
(1, 0, 0, 2, 12, 0, 1)
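The arma entry follows the ordering used by R's arima — (p, q, P, Q, m, d, D) — so it can be decoded like this:

# Decode the 'arma' tuple: (p, q, P, Q, m, d, D), as in R's arima.
p, q, P, Q, m, d, D = result['arma']
print(f"ARIMA({p},{d},{q})({P},{D},{Q})[{m}]")  # ARIMA(1,0,0)(0,1,2)[12]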

Let us now visualize the residuals of our models.

As we can see, the result obtained above is a dictionary. To extract each element from the dictionary, we use the .get() method and then save the result in a pd.DataFrame().

residual=pd.DataFrame(result.get("residuals"), columns=["residual Model"])
residual
  | residual Model
--- | ---
0 | 0.085694
1 | 0.071820
2 | 0.066023
... | ...
533 | 1.258873
534 | 1.585062
535 | -6.199166
fig, axs = plt.subplots(nrows=2, ncols=2)

# Residuals over time
residual.plot(ax=axs[0,0])
axs[0,0].set_title("Residuals");

# Density plot (sns.distplot is deprecated; histplot with kde is the modern equivalent)
sns.histplot(residual["residual Model"], kde=True, ax=axs[0,1]);
axs[0,1].set_title("Density plot - Residual");

# Q-Q plot against the normal distribution
stats.probplot(residual["residual Model"], dist="norm", plot=axs[1,0])
axs[1,0].set_title('Plot Q-Q')

# Autocorrelation of the residuals
plot_acf(residual["residual Model"], lags=35, ax=axs[1,1], color="fuchsia")
axs[1,1].set_title("Autocorrelation");

plt.show();

To generate forecasts we only have to use the predict method specifying the forecast horizon (h). In addition, to calculate prediction intervals associated with the forecasts, we can include the parameter level, which receives a list of levels of the prediction intervals we want to build. Below we compute a 95% interval (level=[95]), and later both 80% and 95% intervals (level=[80, 95]).

Forecast Method

If you want to gain speed in productive settings where you have multiple series or models we recommend using the StatsForecast.forecast method instead of .fit and .predict.

The main difference is that .forecast does not store the fitted values and is highly scalable in distributed environments.

The forecast method takes two arguments: the forecast horizon h and, optionally, level.

  • h (int): represents the forecast h steps into the future. In this case, 12 months ahead.

  • level (list of floats): this optional parameter is used for probabilistic forecasting. Set the level (or confidence percentile) of your prediction interval. For example, level=[90] means that the model expects the real value to be inside that interval 90% of the times.

The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals. Depending on your computer, this step should take around 1 minute.

Y_hat_df = sf.forecast(horizon, fitted=True)

Y_hat_df.head()
unique_id | ds | AutoARIMA
--- | --- | ---
1 | 2016-09-01 | 109.955437
1 | 2016-10-01 | 121.920509
1 | 2016-11-01 | 122.458389
1 | 2016-12-01 | 120.562027
1 | 2017-01-01 | 106.864670
values=sf.forecast_fitted_values()
values
unique_id | ds | y | AutoARIMA
--- | --- | --- | ---
1 | 1972-01-01 | 85.694504 | 85.608803
1 | 1972-02-01 | 71.820000 | 71.748177
1 | 1972-03-01 | 66.022903 | 65.956879
... | ... | ... | ...
1 | 2016-06-01 | 102.404404 | 101.145523
1 | 2016-07-01 | 102.951202 | 101.366135
1 | 2016-08-01 | 104.697701 | 110.896866

Adding 95% confidence interval with the forecast method

sf.forecast(h=12, level=[95])
unique_id | ds | AutoARIMA | AutoARIMA-lo-95 | AutoARIMA-hi-95
--- | --- | --- | --- | ---
1 | 2016-09-01 | 109.955437 | 102.116188 | 117.794685
1 | 2016-10-01 | 121.920509 | 112.380608 | 131.460403
1 | 2016-11-01 | 122.458389 | 112.200500 | 132.716278
... | ... | ... | ... | ...
1 | 2017-06-01 | 96.751160 | 85.873802 | 107.628525
1 | 2017-07-01 | 97.451607 | 86.572372 | 108.330833
1 | 2017-08-01 | 103.420616 | 92.540489 | 114.300743
Y_hat_df=Y_hat_df.reset_index()
Y_hat_df
  | unique_id | ds | AutoARIMA
--- | --- | --- | ---
0 | 1 | 2016-09-01 | 109.955437
1 | 1 | 2016-10-01 | 121.920509
2 | 1 | 2016-11-01 | 122.458389
... | ... | ... | ...
9 | 1 | 2017-06-01 | 96.751160
10 | 1 | 2017-07-01 | 97.451607
11 | 1 | 2017-08-01 | 103.420616
Y_test_df['unique_id'] = Y_test_df['unique_id'].astype(int)
Y_hat_df = Y_test_df.merge(Y_hat_df, how='left', on=['unique_id', 'ds'])

fig, ax = plt.subplots(1, 1, figsize = (18, 7))
plot_df = pd.concat([Y_train_df, Y_hat_df]).set_index('ds')
plot_df[['y', 'AutoARIMA']].plot(ax=ax, linewidth=2)
ax.set_title('Forecast', fontsize=22)
ax.set_ylabel('Monthly Candy Production', fontsize=20)
ax.set_xlabel('Timestamp [t]', fontsize=20)
ax.legend(prop={'size': 15})
ax.grid()

Predict method with confidence interval

To generate forecasts use the predict method.

The predict method takes two arguments: the forecast horizon h and, optionally, level.

  • h (int): represents the forecast h steps into the future. In this case, 12 months ahead.

  • level (list of floats): this optional parameter is used for probabilistic forecasting. Set the level (or confidence percentile) of your prediction interval. For example, level=[95] means that the model expects the real value to be inside that interval 95% of the times.

The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals.

This step should take less than 1 second.

sf.predict(h=12)
unique_id | ds | AutoARIMA
--- | --- | ---
1 | 2016-09-01 | 109.955437
1 | 2016-10-01 | 121.920509
1 | 2016-11-01 | 122.458389
... | ... | ...
1 | 2017-06-01 | 96.751160
1 | 2017-07-01 | 97.451607
1 | 2017-08-01 | 103.420616
forecast_df = sf.predict(h=12, level = [80, 95]) 

forecast_df
unique_id | ds | AutoARIMA | AutoARIMA-lo-95 | AutoARIMA-lo-80 | AutoARIMA-hi-80 | AutoARIMA-hi-95
--- | --- | --- | --- | --- | --- | ---
1 | 2016-09-01 | 109.955437 | 102.116188 | 104.829628 | 115.081245 | 117.794685
1 | 2016-10-01 | 121.920509 | 112.380608 | 115.682701 | 128.158310 | 131.460403
1 | 2016-11-01 | 122.458389 | 112.200500 | 115.751114 | 129.165665 | 132.716278
... | ... | ... | ... | ... | ... | ...
1 | 2017-06-01 | 96.751160 | 85.873802 | 89.638840 | 103.863487 | 107.628525
1 | 2017-07-01 | 97.451607 | 86.572372 | 90.338058 | 104.565147 | 108.330833
1 | 2017-08-01 | 103.420616 | 92.540489 | 96.306480 | 110.534752 | 114.300743

We can join the forecast result with the historical data using the pandas function pd.concat(), and then be able to use this result for graphing.

df_plot=pd.concat([df, forecast_df]).set_index('ds').tail(220)
df_plot
ds | y | unique_id | AutoARIMA | AutoARIMA-lo-95 | AutoARIMA-lo-80 | AutoARIMA-hi-80 | AutoARIMA-hi-95
--- | --- | --- | --- | --- | --- | --- | ---
2000-05-01 | 108.7202 | 1 | NaN | NaN | NaN | NaN | NaN
2000-06-01 | 114.2071 | 1 | NaN | NaN | NaN | NaN | NaN
2000-07-01 | 111.8737 | 1 | NaN | NaN | NaN | NaN | NaN
... | ... | ... | ... | ... | ... | ... | ...
2017-06-01 | NaN | NaN | 96.751160 | 85.873802 | 89.638840 | 103.863487 | 107.628525
2017-07-01 | NaN | NaN | 97.451607 | 86.572372 | 90.338058 | 104.565147 | 108.330833
2017-08-01 | NaN | NaN | 103.420616 | 92.540489 | 96.306480 | 110.534752 | 114.300743

Now let’s visualize the result of our forecast together with the historical data of our time series, and also draw the 80% and 95% prediction intervals obtained from the forecast.

fig, ax = plt.subplots(1, 1, figsize = (20, 8))

plt.plot(df_plot['y'], 'k--', linewidth=2, label="y")
plt.plot(df_plot['AutoARIMA'], color="red", linewidth=2, label="AutoARIMA")

# Specify graph features:
ax.fill_between(df_plot.index, 
                df_plot['AutoARIMA-lo-80'], 
                df_plot['AutoARIMA-hi-80'],
                alpha=.20,
                color='lime',
                label='AutoARIMA_level_80')
ax.fill_between(df_plot.index, 
                df_plot['AutoARIMA-lo-95'], 
                df_plot['AutoARIMA-hi-95'],
                alpha=.2,
                color='white',
                label='AutoARIMA_level_95')
ax.set_title('', fontsize=20)
ax.set_ylabel('Production', fontsize=15)
ax.set_xlabel('Month', fontsize=15)
ax.legend(prop={'size': 15})
ax.grid(True)
plt.show()

Let’s plot the same graph using the plot function that comes in Statsforecast, as shown below.

sf.plot(df, forecast_df, level=[95])

Cross-validation

In previous steps, we’ve taken our historical data to predict the future. However, to assess its accuracy we would also like to know how the model would have performed in the past. To assess the accuracy and robustness of your models on your data, perform Cross-Validation.

With time series data, Cross Validation is done by defining a sliding window across the historical data and predicting the period following it. This form of cross-validation allows us to arrive at a better estimation of our model’s predictive abilities across a wider range of temporal instances while also keeping the data in the training set contiguous as is required by our models.

The following graph depicts such a Cross Validation Strategy:

Perform time series cross-validation

Cross-validation of time series models is considered a best practice but most implementations are very slow. The statsforecast library implements cross-validation as a distributed operation, making the process less time-consuming to perform. If you have big datasets you can also perform Cross Validation in a distributed cluster using Ray, Dask or Spark.

In this case, we want to evaluate the performance of the model over the last five 12-month windows (n_windows=5), moving the window forward 12 months each time (step_size=12). Depending on your computer, this step should take around 1 min.

The cross_validation method from the StatsForecast class takes the following arguments.

  • df: training data frame

  • h (int): represents h steps into the future that are being forecasted. In this case, 12 months ahead.

  • step_size (int): step size between each window. In other words: how often do you want to run the forecasting processes.

  • n_windows(int): number of windows used for cross validation. In other words: what number of forecasting processes in the past do you want to evaluate.

crossvalidation_df = sf.cross_validation(df=Y_train_df,
                                         h=12,
                                         step_size=12,
                                         n_windows=5)

The crossvalidation_df object is a new data frame that includes the following columns:

  • unique_id: index. If you don’t like working with an index, just run crossvalidation_df.reset_index()
  • ds: datestamp or temporal index
  • cutoff: the last datestamp or temporal index for the n_windows.
  • y: true value
  • "model": columns with the model’s name and fitted value.
crossvalidation_df.head()
unique_id | ds | cutoff | y | AutoARIMA
--- | --- | --- | --- | ---
1 | 2011-09-01 | 2011-08-01 | 93.906197 | 104.758850
1 | 2011-10-01 | 2011-08-01 | 116.763397 | 118.705879
1 | 2011-11-01 | 2011-08-01 | 116.825798 | 116.834129
1 | 2011-12-01 | 2011-08-01 | 114.956299 | 117.070084
1 | 2012-01-01 | 2011-08-01 | 99.966202 | 103.552246

Model Evaluation

We can now compute the accuracy of the forecast using an appropriate accuracy metric. Here we’ll use the Root Mean Squared Error (RMSE). To do this, we first need to install datasetsforecast, a Python library developed by Nixtla that includes a function to compute the RMSE.

!pip install datasetsforecast
from datasetsforecast.losses import rmse

The function to compute the RMSE takes two arguments:

  1. The actual values.
  2. The forecasts, in this case, AutoARIMA.
rmse_cv = rmse(crossvalidation_df['y'], crossvalidation_df["AutoARIMA"])  # avoid shadowing the rmse function
print("RMSE using cross-validation: ", rmse_cv)
RMSE using cross-validation:  5.5258384

As you have noticed, we have used the cross validation results to perform the evaluation of our model.
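For a finer-grained view, we can also sketch the RMSE separately for each of the five validation windows, grouping by the cutoff column produced above:

# RMSE per cross-validation window (grouped by cutoff date)
cv = crossvalidation_df.reset_index()
rmse_per_window = cv.groupby('cutoff').apply(
    lambda w: np.sqrt(np.mean((w['y'] - w['AutoARIMA']) ** 2))
)
print(rmse_per_window)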

Now we are going to evaluate our model with the results of the predictions, using several metrics to measure accuracy: MAE, MAPE, MASE, RMSE, and SMAPE.

from datasetsforecast.losses import mae, mape, mase, rmse, smape
def evaluate_performance(y_hist, y_true, model):
    evaluation = {}
    evaluation[model] = {}
    for metric in [mase, mae, mape, rmse, smape]:
        metric_name = metric.__name__
        if metric_name == 'mase':
            evaluation[model][metric_name] = metric(y_true['y'].values, 
                                                y_true[model].values, 
                                                y_hist['y'].values, seasonality=12)
        else:
            evaluation[model][metric_name] = metric(y_true['y'].values, y_true[model].values)
    return pd.DataFrame(evaluation).T
evaluate_performance(Y_train_df, Y_hat_df, model='AutoARIMA')
  | mae | mape | mase | rmse | smape
--- | --- | --- | --- | --- | ---
AutoARIMA | 5.26042 | 4.794312 | 1.015379 | 6.021264 | 4.915602

Acknowledgements

We would like to thank Naren Castellon for writing this tutorial.

References

  1. Nixtla-Arima

  2. Rob J. Hyndman and George Athanasopoulos (2018). Forecasting: Principles and Practice, “Time series cross-validation”.