Introduction

Simple Exponential Smoothing Optimized (SES Optimized) is a forecasting model used to predict future values in univariate time series. It is a variant of the simple exponential smoothing (SES) method that uses an optimization approach to estimate the model parameters more accurately.

The SES Optimized method uses a single smoothing parameter to estimate the level of the time series. The model minimizes the mean squared error (MSE) between the one-step-ahead predictions and the actual values in the training sample using an optimization algorithm, rather than requiring the user to choose the smoothing parameter manually.

The SES Optimized approach is especially useful for time series without a clear trend or seasonal pattern, or for time series with noisy data. However, it is important to note that this model assumes that the level of the series is locally stable and that the remaining variation in the data is random noise. If these assumptions are not met (for example, if the series has a strong trend or seasonality), the SES Optimized model may not perform well and another forecasting method may be required.
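As a quick, hedged sanity check of that assumption, you can run an augmented Dickey-Fuller test with statsmodels before committing to the model. The snippet below is a minimal sketch; the data frame df and column y match the data loaded later in this tutorial, and the test is a heuristic, not a substitute for visual inspection.

import pandas as pd
from statsmodels.tsa.stattools import adfuller

def looks_stationary(series: pd.Series, alpha: float = 0.05) -> bool:
    """ADF test: a small p-value rejects the unit-root null (suggests stationarity)."""
    _, pvalue, *_ = adfuller(series.dropna())
    return pvalue < alpha

# Example usage once the ads data is loaded below:
# print(looks_stationary(df["y"]))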

Simple Exponential Smoothing Model

The simplest of the exponentially smoothing methods is naturally called simple exponential smoothing (SES). This method is suitable for forecasting data with no clear trend or seasonal pattern.

Using the naïve method, all forecasts for the future are equal to the last observed value of the series,

$$\hat{y}_{T+h|T} = y_{T},$$

for $h=1,2,\dots$. Hence, the naïve method assumes that the most recent observation is the only important one, and all previous observations provide no information for the future. This can be thought of as a weighted average where all of the weight is given to the last observation.

Using the average method, all future forecasts are equal to a simple average of the observed data,

$$\hat{y}_{T+h|T} = \frac{1}{T} \sum_{t=1}^{T} y_t,$$

for $h=1,2,\dots$. Hence, the average method assumes that all observations are of equal importance, and gives them equal weights when generating forecasts.
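As a quick illustration of the two extremes, the following sketch computes both forecasts with plain numpy, using the first few values of the ads series loaded later in this tutorial:

import numpy as np

y = np.array([80115, 79885, 89325, 101930, 121630], dtype=float)
naive_forecast = y[-1]       # all weight on the last observation
average_forecast = y.mean()  # equal weight on every observation
print(naive_forecast, average_forecast)  # 121630.0 94577.0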

We often want something between these two extremes. For example, it may be sensible to attach larger weights to more recent observations than to observations from the distant past. This is exactly the concept behind simple exponential smoothing. Forecasts are calculated using weighted averages, where the weights decrease exponentially as observations come from further in the past, with the smallest weights associated with the oldest observations:

$$\hat{y}_{T+1|T} = \alpha y_T + \alpha(1-\alpha) y_{T-1} + \alpha(1-\alpha)^2 y_{T-2} + \cdots,$$

where $0 \le \alpha \le 1$ is the smoothing parameter. The one-step-ahead forecast for time $T+1$ is a weighted average of all of the observations in the series $y_1,\dots,y_T$. The rate at which the weights decrease is controlled by the parameter $\alpha$.

For any $\alpha$ between 0 and 1, the weights attached to the observations decrease exponentially as we go back in time, hence the name "exponential smoothing". If $\alpha$ is small (i.e., close to 0), more weight is given to observations from the more distant past. If $\alpha$ is large (i.e., close to 1), more weight is given to the more recent observations. For the extreme case where $\alpha=1$, $\hat{y}_{T+1|T}=y_T$ and the forecasts are equal to the naïve forecasts.
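The decay of the weights is easy to see numerically. The short sketch below prints the weights $\alpha(1-\alpha)^j$ attached to the observation $j$ steps in the past, for a small and a large $\alpha$:

import numpy as np

for alpha in (0.2, 0.8):
    j = np.arange(5)                    # j = 0 is the most recent observation
    weights = alpha * (1 - alpha) ** j  # exponentially decaying weights
    print(f"alpha={alpha}: {np.round(weights, 4)}")
# alpha=0.2: [0.2    0.16   0.128  0.1024 0.0819]
# alpha=0.8: [0.8    0.16   0.032  0.0064 0.0013]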

Optimisation

The application of every exponential smoothing method requires the smoothing parameters and the initial values to be chosen. In particular, for simple exponential smoothing, we need to select the values of $\alpha$ and $\ell_0$. All forecasts can be computed from the data once we know those values. For the methods that follow there is usually more than one smoothing parameter and more than one initial component to be chosen.

In some cases, the smoothing parameters may be chosen in a subjective manner — the forecaster specifies the value of the smoothing parameters based on previous experience. However, a more reliable and objective way to obtain values for the unknown parameters is to estimate them from the observed data.

In regression models, we estimated the coefficients by minimising the sum of the squared residuals (usually known as SSE or "sum of squared errors"). Similarly, the unknown parameters and the initial values for any exponential smoothing method can be estimated by minimising the SSE. The residuals are specified as $e_t = y_t - \hat{y}_{t|t-1}$ for $t=1,\dots,T$. Hence, we find the values of the unknown parameters and the initial values that minimise

$$\text{SSE} = \sum_{t=1}^{T} \left(y_t - \hat{y}_{t|t-1}\right)^2 = \sum_{t=1}^{T} e_t^2.$$

Unlike the regression case (where we have formulas which return the values of the regression coefficients that minimise the SSE), this involves a non-linear minimisation problem, and we need to use an optimisation tool to solve it.
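As a sketch of what such an optimiser does for SES, the snippet below uses scipy.optimize.minimize_scalar to find the $\alpha$ that minimises the SSE. For simplicity it fixes the initial level at the first observation instead of optimising it jointly, so it illustrates the idea rather than reproducing the exact StatsForecast implementation.

import numpy as np
from scipy.optimize import minimize_scalar

def ses_sse(alpha: float, y: np.ndarray) -> float:
    """Sum of squared one-step-ahead errors for a given smoothing parameter."""
    level = y[0]  # simple initialisation: l_0 = y_1
    sse = 0.0
    for t in range(1, len(y)):
        sse += (y[t] - level) ** 2                  # e_t = y_t - yhat_{t|t-1}
        level = alpha * y[t] + (1 - alpha) * level  # update the level
    return sse

y = np.array([80115, 79885, 89325, 101930, 121630], dtype=float)
res = minimize_scalar(ses_sse, bounds=(0.0, 1.0), method="bounded", args=(y,))
print(f"optimal alpha ~= {res.x:.3f}")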

Loading libraries and data

Tip

StatsForecast will be needed. To install it, see instructions.

Next, we import plotting libraries and configure the plotting style.

import matplotlib.pyplot as plt
import seaborn as sns
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
plt.style.use('grayscale') # fivethirtyeight  grayscale  classic
plt.rcParams['lines.linewidth'] = 1.5
dark_style = {
    'figure.facecolor': '#008080',  # #212946
    'axes.facecolor': '#008080',
    'savefig.facecolor': '#008080',
    'axes.grid': True,
    'axes.grid.which': 'both',
    'axes.spines.left': False,
    'axes.spines.right': False,
    'axes.spines.top': False,
    'axes.spines.bottom': False,
    'grid.color': '#000000',  #2A3459
    'grid.linewidth': '1',
    'text.color': '0.9',
    'axes.labelcolor': '0.9',
    'xtick.color': '0.9',
    'ytick.color': '0.9',
    'font.size': 12 }
plt.rcParams.update(dark_style)


from pylab import rcParams
rcParams['figure.figsize'] = (18,7)

Read Data

import pandas as pd
df=pd.read_csv("https://raw.githubusercontent.com/Naren8520/Serie-de-tiempo-con-Machine-Learning/main/Data/ads.csv")
df.head()
                  Time     Ads
0  2017-09-13T00:00:00   80115
1  2017-09-13T01:00:00   79885
2  2017-09-13T02:00:00   89325
3  2017-09-13T03:00:00  101930
4  2017-09-13T04:00:00  121630

The input to StatsForecast is always a data frame in long format with three columns: unique_id, ds and y:

  • The unique_id (string, int or category) represents an identifier for the series.

  • The ds (datestamp) column should be of a format expected by Pandas, ideally YYYY-MM-DD for a date or YYYY-MM-DD HH:MM:SS for a timestamp.

  • The y (numeric) represents the measurement we wish to forecast.

df["unique_id"]="1"
df.columns=["ds", "y", "unique_id"]
df.head()
                    ds       y unique_id
0  2017-09-13T00:00:00   80115         1
1  2017-09-13T01:00:00   79885         1
2  2017-09-13T02:00:00   89325         1
3  2017-09-13T03:00:00  101930         1
4  2017-09-13T04:00:00  121630         1
print(df.dtypes)
ds           object
y             int64
unique_id    object
dtype: object

We can see that our time variable (ds) has an object dtype, so we need to convert it to a datetime format:

df["ds"] = pd.to_datetime(df["ds"])

Explore Data with the plot method

Plot some series using the plot method from the StatsForecast class. This method plots a random series from the dataset and is useful for basic EDA.

from statsforecast import StatsForecast

StatsForecast.plot(df)

Autocorrelation plots

fig, axs = plt.subplots(nrows=1, ncols=2)

plot_acf(df["y"], lags=30, ax=axs[0], color="fuchsia")
axs[0].set_title("Autocorrelation")

plot_pacf(df["y"], lags=30, ax=axs[1], color="lime")
axs[1].set_title("Partial Autocorrelation")

plt.show()

Split the data into training and testing

Let’s divide our data into two sets:

  1. Data to train our Simple Exponential Smoothing Optimized Model
  2. Data to test our model

For the test data we will use the last 30 hours to test and evaluate the performance of our model.

train = df[df.ds<='2017-09-20 17:00:00'] 
test = df[df.ds>'2017-09-20 17:00:00']
train.shape, test.shape
((186, 3), (30, 3))

Implementation of SimpleExponentialSmoothingOptimized with StatsForecast

Load libraries

from statsforecast import StatsForecast
from statsforecast.models import SimpleExponentialSmoothingOptimized

Instantiating Model

horizon = len(test) # number of predictions

models = [SimpleExponentialSmoothingOptimized()]

We fit the models by instantiating a new StatsForecast object with the following parameters:

  • models: a list of models. Select the models you want from models and import them.

  • freq: a string indicating the frequency of the data. (See pandas’ available frequencies.)

  • n_jobs: int, number of jobs used in the parallel processing, use -1 for all cores.

  • fallback_model: a model to be used if a model fails.

Any settings are passed into the constructor. Then you call its fit method and pass in the historical data frame.
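As a hedged sketch, a fuller configuration might look like the following. The SeasonalNaive fallback with season_length=24 is an illustrative choice for hourly data, not part of this tutorial's pipeline, which uses the simpler call below.

from statsforecast import StatsForecast
from statsforecast.models import SimpleExponentialSmoothingOptimized, SeasonalNaive

sf_full = StatsForecast(
    models=[SimpleExponentialSmoothingOptimized()],
    freq='h',                                        # hourly data
    n_jobs=-1,                                       # use all available cores
    fallback_model=SeasonalNaive(season_length=24),  # used if a model fails
)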

sf = StatsForecast(models=models, freq='h')

Fit the Model

sf.fit(df=train)
StatsForecast(models=[SESOpt])

Let’s see the results of our Simple Exponential Smoothing Optimized model. We can observe it with the following instruction:

result=sf.fitted_[0,0].model_
result
{'mean': array([139526.04792941]),
 'fitted': array([       nan,  80115.   ,  79887.3  ,  89230.625, 101803.01 ,
        121431.73 , 116524.57 , 106595.3  , 102833.   , 108002.78 ,
        116043.78 , 130880.14 , 148838.6  , 157502.48 , 150782.88 ,
        149309.88 , 150092.1  , 144833.12 , 150631.44 , 163707.92 ,
        166209.73 , 139786.89 , 106233.92 ,  96874.54 ,  82663.55 ,
         80150.38 ,  75383.16 ,  85007.78 , 101909.28 , 124902.74 ,
        118098.73 , 109313.734, 102543.39 , 102243.03 , 115704.03 ,
        130391.64 , 144185.67 , 148922.16 , 149147.72 , 148051.08 ,
        148802.4  , 149819.72 , 150562.5  , 149451.22 , 150509.31 ,
        129343.8  , 104070.29 ,  92293.95 ,  82860.29 ,  76380.45 ,
         75142.51 ,  82565.02 ,  88732.7  , 118133.02 , 115219.43 ,
        110982.8  ,  98981.23 , 104132.96 , 108619.68 , 126459.8  ,
        140295.25 , 152348.25 , 146335.73 , 148003.16 , 147737.69 ,
        145769.88 , 149249.84 , 159620.25 , 161070.36 , 135775.5  ,
        113173.305, 100329.734,  87742.15 ,  87834.07 ,  88834.89 ,
         92314.85 , 104343.5  , 115824.03 , 128818.74 , 141259.34 ,
        144408.19 , 143261.58 , 133290.72 , 131260.5  , 142367.81 ,
        157224.92 , 152547.25 , 153723.12 , 151220.28 , 150650.75 ,
        147467.16 , 152474.42 , 146931.   , 125461.86 , 118000.37 ,
         96913.   ,  93643.03 ,  89105.83 ,  89342.61 ,  90562.68 ,
         98212.73 , 112426.43 , 129299.56 , 141283.95 , 152447.23 ,
        152578.67 , 141284.1  , 147487.34 , 160973.77 , 166281.39 ,
        166775.02 , 163176.34 , 157363.72 , 159038.1  , 160010.19 ,
        168261.66 , 169883.61 , 142981.73 , 113255.266,  97504.1  ,
         81833.29 ,  79533.234,  78361.836,  87948.17 ,  99671.58 ,
        123538.914, 111447.14 ,  99560.07 ,  97674.05 ,  97655.19 ,
        102515.9  , 119755.86 , 135595.02 , 140074.75 , 141713.45 ,
        142214.94 , 145328.55 , 145334.94 , 150359.25 , 161408.39 ,
        153494.94 , 134907.75 , 107343.43 ,  95167.984,  79671.53 ,
         78348.37 ,  74706.78 ,  81917.164,  97789.67 , 119129.445,
        113175.14 ,  99022.95 ,  94050.23 ,  93663.9  , 104079.79 ,
        119593.3  , 135826.03 , 146348.7  , 139236.84 , 147145.12 ,
        144957.1  , 151305.88 , 156032.27 , 161331.47 , 164973.22 ,
        134398.83 , 105873.14 ,  92985.18 ,  79407.15 ,  79974.27 ,
         78128.64 ,  85708.44 ,  99866.984, 123639.87 , 116408.05 ,
        104411.18 , 101469.71 ,  97673.34 , 108159.086, 121119.09 ,
        140652.69 , 138575.98 , 140965.86 , 141519.4  , 141589.3  ,
        140619.8  ], dtype=float32)}

Let us now visualize the fitted values of our model.

As we can see, the result obtained above is a dictionary. To extract each element from it, we use the .get() method and then save the result in a pd.DataFrame().

fitted=pd.DataFrame(result.get("fitted"), columns=["fitted"])
fitted["ds"]=train["ds"]  # the fitted values correspond to the training period
fitted
            fitted                  ds
0              NaN 2017-09-13 00:00:00
1     80115.000000 2017-09-13 01:00:00
2     79887.296875 2017-09-13 02:00:00
..             ...                 ...
183  141519.406250 2017-09-20 15:00:00
184  141589.296875 2017-09-20 16:00:00
185  140619.796875 2017-09-20 17:00:00
sns.lineplot(df, x="ds", y="y", label="Actual", linewidth=2)
sns.lineplot(fitted,x="ds", y="fitted", label="Fitted", linestyle="--" )

plt.title("Ads watched (hourly data)");
plt.show()

Forecast Method

If you want to gain speed in production settings where you have multiple series or models, we recommend using the StatsForecast.forecast method instead of .fit and .predict.

The main difference is that forecast does not store the fitted values and is highly scalable in distributed environments.

The forecast method takes two arguments: the forecast horizon h and (optionally) level.

  • h (int): represents the forecast h steps into the future. In this case, 30 hours ahead.

The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals when level is used. Depending on your computer, this step should take around 1 minute.

# Prediction
Y_hat = sf.forecast(df=train, h=horizon, fitted=True)
Y_hat
   unique_id                  ds         SESOpt
0          1 2017-09-20 18:00:00  139526.046875
1          1 2017-09-20 19:00:00  139526.046875
2          1 2017-09-20 20:00:00  139526.046875
..       ...                 ...            ...
27         1 2017-09-21 21:00:00  139526.046875
28         1 2017-09-21 22:00:00  139526.046875
29         1 2017-09-21 23:00:00  139526.046875

Let’s visualize the fitted values

values=sf.forecast_fitted_values()
values.head()
  unique_id                  ds         y         SESOpt
0         1 2017-09-13 00:00:00   80115.0            NaN
1         1 2017-09-13 01:00:00   79885.0   80115.000000
2         1 2017-09-13 02:00:00   89325.0   79887.296875
3         1 2017-09-13 03:00:00  101930.0   89230.625000
4         1 2017-09-13 04:00:00  121630.0  101803.007812

Predict method with confidence interval

To generate forecasts use the predict method.

The predict method takes two arguments: the forecast horizon h and (optionally) level.

  • h (int): represents the forecast h steps into the future. In this case, 30 hours ahead.

The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals when level is specified.

This step should take less than 1 second.

forecast_df = sf.predict(h=horizon) 
forecast_df
   unique_id                  ds         SESOpt
0          1 2017-09-20 18:00:00  139526.046875
1          1 2017-09-20 19:00:00  139526.046875
2          1 2017-09-20 20:00:00  139526.046875
..       ...                 ...            ...
27         1 2017-09-21 21:00:00  139526.046875
28         1 2017-09-21 22:00:00  139526.046875
29         1 2017-09-21 23:00:00  139526.046875
sf.plot(train, forecast_df)
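Note that SimpleExponentialSmoothingOptimized does not produce analytic prediction intervals on its own, which is why the data frames above contain only point forecasts. A hedged sketch of one way to obtain intervals is to wrap the model with conformal intervals; this assumes the ConformalIntervals utility and the prediction_intervals argument available in recent statsforecast versions.

from statsforecast import StatsForecast
from statsforecast.models import SimpleExponentialSmoothingOptimized
from statsforecast.utils import ConformalIntervals

# Sketch: conformal prediction intervals computed from cross-validated residuals.
sf_ci = StatsForecast(
    models=[SimpleExponentialSmoothingOptimized(
        prediction_intervals=ConformalIntervals(h=30, n_windows=3))],
    freq='h',
)
forecast_ci = sf_ci.forecast(df=train, h=30, level=[80, 95])
# forecast_ci should now include SESOpt-lo-80/95 and SESOpt-hi-80/95 columns.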

Cross-validation

In previous steps, we’ve taken our historical data to predict the future. However, to assess its accuracy, we would also like to know how the model would have performed in the past. To assess the accuracy and robustness of your model on your data, perform cross-validation.

With time series data, Cross Validation is done by defining a sliding window across the historical data and predicting the period following it. This form of cross-validation allows us to arrive at a better estimation of our model’s predictive abilities across a wider range of temporal instances while also keeping the data in the training set contiguous as is required by our models.

The following graph depicts such a cross-validation strategy:

Perform time series cross-validation

Cross-validation of time series models is considered a best practice but most implementations are very slow. The statsforecast library implements cross-validation as a distributed operation, making the process less time-consuming to perform. If you have big datasets you can also perform Cross Validation in a distributed cluster using Ray, Dask or Spark.

In this case, we want to evaluate the model’s performance over the last three windows of 30 hours each (n_windows=3), forecasting every 30 hours (step_size=30). Depending on your computer, this step should take around 1 minute.

The cross_validation method from the StatsForecast class takes the following arguments.

  • df: training data frame

  • h (int): represents h steps into the future that are being forecasted. In this case, 30 hours ahead.

  • step_size (int): step size between each window. In other words: how often do you want to run the forecasting processes.

  • n_windows(int): number of windows used for cross validation. In other words: what number of forecasting processes in the past do you want to evaluate.

crossvalidation_df = sf.cross_validation(df=df,
                                         h=horizon,
                                         step_size=30,
                                         n_windows=3)

The crossvalidation_df object is a new data frame that includes the following columns:

  • unique_id: index. If you don’t like working with index just run crossvalidation_df.reset_index().
  • ds: datestamp or temporal index
  • cutoff: the last datestamp or temporal index for the n_windows.
  • y: true value
  • model: columns with the model’s name and predicted value.
crossvalidation_df
   unique_id                  ds              cutoff         y         SESOpt
0          1 2017-09-18 06:00:00 2017-09-18 05:00:00   99440.0  111447.140625
1          1 2017-09-18 07:00:00 2017-09-18 05:00:00   97655.0  111447.140625
2          1 2017-09-18 08:00:00 2017-09-18 05:00:00   97655.0  111447.140625
..       ...                 ...                 ...       ...            ...
87         1 2017-09-21 21:00:00 2017-09-20 17:00:00  103080.0  139526.046875
88         1 2017-09-21 22:00:00 2017-09-20 17:00:00   95155.0  139526.046875
89         1 2017-09-21 23:00:00 2017-09-20 17:00:00   80285.0  139526.046875

Model Evaluation

Now we are going to evaluate our model with the results of the predictions. We will use several metrics (MAE, MAPE, MASE, RMSE, SMAPE) to evaluate the accuracy.

from functools import partial

import utilsforecast.losses as ufl
from utilsforecast.evaluation import evaluate
evaluate(
    test.merge(Y_hat),
    metrics=[ufl.mae, ufl.mape, partial(ufl.mase, seasonality=24), ufl.rmse, ufl.smape],
    train_df=train,
)
  unique_id metric        SESOpt
0         1    mae  29230.182292
1         1   mape      0.314203
2         1   mase      3.611444
3         1   rmse  35866.963426
4         1  smape      0.124271

Acknowledgements

We would like to thank Naren Castellon for writing this tutorial.

References

  1. Changquan Huang and Alla Petukhina (2022). Applied Time Series Analysis and Forecasting with Python. Springer.
  2. Ivan Svetunkov. Forecasting and Analytics with the Augmented Dynamic Adaptive Model (ADAM).
  3. James D. Hamilton (1994). Time Series Analysis. Princeton University Press, Princeton, New Jersey, 1st Edition.
  4. Nixtla Parameters.
  5. Pandas available frequencies.
  6. Rob J. Hyndman and George Athanasopoulos (2018). Forecasting: Principles and Practice, “Time series cross-validation”.
  7. Seasonal periods - Rob J Hyndman.