Seasonal Exponential Smoothing Optimized Model
Step-by-step guide on using the SeasonalExponentialSmoothingOptimized model with StatsForecast.
Table of Contents
- Introduction
- Seasonal Exponential Smoothing Optimized Model
- Loading libraries and data
- Explore data with the plot method
- Split the data into training and testing
- Implementation of SeasonalExponentialSmoothingOptimized with StatsForecast
- Cross-validation
- Model evaluation
- References
Introduction
The Seasonal Exponential Smoothing Optimized (SESO) model is a forecasting technique used to predict future values of a time series that exhibits seasonal patterns. It is a variant of the exponential smoothing method, which uses a combination of past and predicted values to generate a prediction.
The SESO algorithm uses an optimization approach to find the optimal values of the seasonal exponential smoothing parameters. These parameters include the smoothing coefficients for the levels, trends, and seasonal components of the time series.
The SESO model is particularly useful for forecasting time series with pronounced seasonal patterns, such as seasonal product sales or seasonal temperatures, among many other applications. By using SESO, accurate and useful forecasts can be generated for business planning and decision making.
Seasonal Exponential Smoothing Optimized Model
The SESO model is based on the exponential smoothing method, which uses a combination of past and predicted values to generate a prediction. The mathematical formula for the SESO model is as follows:
$$\hat{y}_{s,t+1} = \alpha\, y_{s,t} + (1 - \alpha)\,\hat{y}_{s,t}$$

Where:
- $\hat{y}_{s,t+1}$ is the forecast for the next period of season $s$.
- $\alpha$ is the smoothing parameter, which is optimized by minimizing the squared error.
- $y_{s,t}$ is the current observation of season $s$ in period $t$.
- $\hat{y}_{s,t}$ is the forecast for the previous period of season $s$.

The equation indicates that the forecast value for the next period of a season is calculated as a weighted combination of the current observation and the previous forecast for the same season. The smoothing parameter $\alpha$ controls the relative influence of these two terms on the final prediction. A high value of $\alpha$ gives more weight to the current observation and less weight to the previous forecast, making the model more sensitive to recent changes in the time series. A low value of $\alpha$, on the other hand, gives more weight to the previous forecast and less weight to the current observation, making the model more stable and smooth.
The optimal value of the smoothing parameter is determined by minimizing the squared error between the forecasts generated by the model and the actual values of the time series.
Model selection
Model selection in the context of the SESO model refers to the process of choosing the optimal values of the smoothing parameters and the seasonal component for the model. The optimal values of these parameters are the ones that result in the best forecast performance for the given data set.
A great advantage of the ETS statistical framework is that information criteria can be used for model selection. The AIC, $\text{AIC}_c$ and BIC, defined below, can be used here to determine which of the ETS models is most appropriate for a given time series.

For ETS models, Akaike's Information Criterion (AIC) is defined as

$$\text{AIC} = -2\log(L) + 2k,$$

where $L$ is the likelihood of the model and $k$ is the total number of parameters and initial states that have been estimated (including the residual variance).

The AIC corrected for small sample bias ($\text{AIC}_c$) is defined as

$$\text{AIC}_c = \text{AIC} + \frac{2k(k+1)}{T-k-1},$$

and the Bayesian Information Criterion (BIC) is

$$\text{BIC} = \text{AIC} + k\left[\log(T) - 2\right].$$
These criteria balance the goodness of fit with the complexity of the model and provide a way to choose the model that maximizes the likelihood of the data while minimizing the number of parameters.
In addition to these techniques, expert judgment and domain knowledge can also be used to select the optimal SESO model. This involves considering the underlying dynamics of the time series, the patterns of seasonality, and any other relevant factors that may influence the choice of the model.
Overall, the process of model selection for the SESO model involves a combination of statistical techniques, information criteria, and expert judgment to identify the optimal values of the smoothing parameters and the seasonal component that result in the best forecast performance for the given data set.
Loading libraries and data
Tip
Statsforecast will be needed. To install, see instructions.
Next, we import plotting libraries and configure the plotting style.
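A minimal sketch of this setup; the specific style choices are just one option:

```python
import pandas as pd
import matplotlib.pyplot as plt

plt.style.use("ggplot")                   # any matplotlib style works
plt.rcParams["figure.figsize"] = (12, 5)  # wider figures for time series
```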
Read Data
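A sketch of loading the data; the file name `ads.csv` is a placeholder for wherever your copy of the hourly ads dataset lives:

```python
df = pd.read_csv("ads.csv")  # placeholder path to the hourly ads dataset
df.head()
```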
| | Time | Ads |
|---|---|---|
| 0 | 2017-09-13T00:00:00 | 80115 |
| 1 | 2017-09-13T01:00:00 | 79885 |
| 2 | 2017-09-13T02:00:00 | 89325 |
| 3 | 2017-09-13T03:00:00 | 101930 |
| 4 | 2017-09-13T04:00:00 | 121630 |
The input to StatsForecast is always a data frame in long format with three columns: `unique_id`, `ds` and `y`:

- The `unique_id` (string, int or category) represents an identifier for the series.
- The `ds` (datestamp) column should be of a format expected by Pandas, ideally YYYY-MM-DD for a date or YYYY-MM-DD HH:MM:SS for a timestamp.
- The `y` (numeric) represents the measurement we wish to forecast.
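A sketch of reshaping the data into this format, assuming the column names `Time` and `Ads` shown above:

```python
df = df.rename(columns={"Time": "ds", "Ads": "y"})
df["unique_id"] = 1  # a single series, so a constant identifier
df = df[["ds", "y", "unique_id"]]
df.head()
```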
| | ds | y | unique_id |
|---|---|---|---|
| 0 | 2017-09-13T00:00:00 | 80115 | 1 |
| 1 | 2017-09-13T01:00:00 | 79885 | 1 |
| 2 | 2017-09-13T02:00:00 | 89325 | 1 |
| 3 | 2017-09-13T03:00:00 | 101930 | 1 |
| 4 | 2017-09-13T04:00:00 | 121630 | 1 |
We can see that our time variable `ds` is in an object format, so we need to convert it to a date format.
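The conversion is a one-liner with pandas:

```python
df["ds"] = pd.to_datetime(df["ds"])
```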
Explore Data with the plot method
Plot some series using the plot method from the StatsForecast class. This method displays a random series from the dataset and is useful for basic EDA.
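For example, with the data frame built above:

```python
from statsforecast import StatsForecast

StatsForecast.plot(df)
```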
The Augmented Dickey-Fuller Test
An Augmented Dickey-Fuller (ADF) test is a type of statistical test that determines whether a unit root is present in time series data. Unit roots can cause unpredictable results in time series analysis. A null hypothesis is formed in the unit root test to determine how strongly time series data is affected by a trend. By accepting the null hypothesis, we accept the evidence that the time series data is not stationary. By rejecting the null hypothesis, or accepting the alternative hypothesis, we accept the evidence that the time series data is generated by a stationary process; such a process is also known as trend-stationary. The values of the ADF test statistic are negative, and lower ADF values indicate a stronger rejection of the null hypothesis.
The Augmented Dickey-Fuller test is a common statistical test used to check whether a given time series is stationary. We can achieve this by defining the null and alternate hypotheses:

- Null hypothesis: the time series is non-stationary; it exhibits a time-dependent trend.
- Alternate hypothesis: the time series is stationary; in other words, the series does not depend on time.

The decision rule is then:

- ADF or t statistic < critical values: reject the null hypothesis; the time series is stationary.
- ADF or t statistic > critical values: fail to reject the null hypothesis; the time series is non-stationary.
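A sketch of running the test with statsmodels:

```python
from statsmodels.tsa.stattools import adfuller

result = adfuller(df["y"].values)
print(f"ADF Statistic: {result[0]:.4f}")
print(f"p-value: {result[1]:.4f}")
# A statistic below the critical values (p-value < 0.05)
# rejects the null hypothesis: the series is stationary.
```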
Autocorrelation plots
The important characteristics of Autocorrelation (ACF) and Partial Autocorrelation (PACF) are as follows:
Autocorrelation (ACF):

1. Identifies patterns of temporal dependence: the ACF shows the correlation between an observation and its lagged values at different time intervals. It helps identify patterns of temporal dependency in a time series, such as the presence of trends or seasonality.
2. Indicates the "memory" of the series: the ACF allows us to determine how much past observations influence future ones. If the ACF shows significant autocorrelations at several lags, it indicates that the series has a long-term memory and that past observations are relevant to predict future ones.
3. Helps identify MA (moving average) models: the shape of the ACF can reveal the presence of moving average components in the time series. Lags where the ACF shows a significant correlation may indicate the order of an MA model.

Partial Autocorrelation (PACF):

1. Identifies direct dependence: unlike the ACF, the PACF eliminates the indirect effects of intermediate lags and measures the direct correlation between an observation and its lagged values. It helps to identify the direct dependence between an observation and its lag values, without the influence of intermediate lags.
2. Helps identify AR (autoregressive) models: the shape of the PACF can reveal the presence of autoregressive components in the time series. Lags at which the PACF shows a significant correlation may indicate the order of an AR model.
3. Used in conjunction with the ACF: the PACF is used together with the ACF to determine the order of an AR or MA model. By analyzing both the ACF and the PACF, significant lags can be identified and a model suitable for time series analysis and forecasting can be built.
In summary, the ACF and the PACF are complementary tools in time series analysis that provide information on time dependence and help identify the appropriate components to build forecast models.
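A sketch of producing both plots with statsmodels; 48 lags (two daily cycles) is an assumption suited to hourly data:

```python
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(12, 4))
plot_acf(df["y"], lags=48, ax=axs[0])                 # two full daily cycles
plot_pacf(df["y"], lags=48, ax=axs[1], method="ywm")  # Yule-Walker estimate
plt.show()
```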
Decomposition of the time series
How to decompose a time series and why?
In time series analysis, knowing past data is essential to forecast new values. More formally, it is very important to know the patterns that values follow over time. There can be many reasons that cause our forecast values to go in the wrong direction. Basically, a time series consists of four components, and variation in those components causes changes in the pattern of the time series. These components are:
- Level: This is the primary value that averages over time.
- Trend: The trend is the value that causes increasing or decreasing patterns in a time series.
- Seasonality: This is a cyclical event that occurs in a time series for a short time and causes short-term increasing or decreasing patterns in a time series.
- Residual/Noise: These are the random variations in the time series.
Combining these components over time forms a time series. Most time series contain a level and a noise/residual component, while trend and seasonality are optional. If seasonality and trend are part of the time series, they will affect the forecast values, since the pattern of the forecast may differ from the pattern seen in the historical series.
The components in a time series can be combined in two ways:

- Additive
- Multiplicative
Additive time series
If the components of the time series are added together to make the time series, then the time series is called an additive time series. By visualization, we can say that the time series is additive if the increasing or decreasing pattern of the time series is similar throughout the series. The mathematical function of any additive time series can be represented by:

$$y(t) = \text{Level} + \text{Trend} + \text{Seasonality} + \text{Noise}$$
Multiplicative time series
If the components of the time series are multiplied together, then the time series is called a multiplicative time series. For visualization, if the time series has exponential growth or decline over time, then it can be considered a multiplicative time series. The mathematical function of the multiplicative time series can be represented as:

$$y(t) = \text{Level} \times \text{Trend} \times \text{Seasonality} \times \text{Noise}$$
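A sketch of producing both decompositions with statsmodels, again assuming a daily cycle of 24 hours:

```python
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

series = df.set_index("ds")["y"]
seasonal_decompose(series, model="additive", period=24).plot()
seasonal_decompose(series, model="multiplicative", period=24).plot()
plt.show()
```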
(Plots: additive and multiplicative decomposition of the series)
Split the data into training and testing
Let's divide our data into two sets:

- Data to train our Seasonal Exponential Smoothing Optimized model.
- Data to test our model.
For the test data we will use the last 30 hours to test and evaluate the performance of our model.
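A sketch of the split, keeping the last 30 hourly observations for testing:

```python
train = df[:-30]  # all but the last 30 hours
test = df[-30:]   # the last 30 hours
train.shape, test.shape
```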
Implementation of SeasonalExponentialSmoothingOptimized with StatsForecast
Load libraries
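We need the StatsForecast class and the model itself:

```python
from statsforecast import StatsForecast
from statsforecast.models import SeasonalExponentialSmoothingOptimized
```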
Building Model
Import and instantiate the models. Setting the `season_length` argument is sometimes tricky. This article on Seasonal periods by the master, Rob Hyndman, can be useful for choosing `season_length`.
We fit the models by instantiating a new StatsForecast object with the following parameters:
- `models`: a list of models. Select the models you want from models and import them.
- `freq`: a string indicating the frequency of the data. (See pandas' available frequencies.)
- `n_jobs`: int, number of jobs used in the parallel processing; use -1 for all cores.
- `fallback_model`: a model to be used if a model fails.
Any settings are passed into the constructor. Then you call its fit method and pass in the historical data frame.
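A sketch of such a setup for our hourly series; `season_length=24` (a daily cycle) is an assumption based on the data, and `SeasESOpt`, the model's default alias, is the column name that appears in the outputs below:

```python
season_length = 24  # hourly data with a daily seasonal cycle (assumption)

sf = StatsForecast(
    models=[SeasonalExponentialSmoothingOptimized(season_length=season_length)],
    freq="H",   # hourly frequency
    n_jobs=-1,  # use all cores
)
```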
Fit the Model
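Fitting is then a single call on the training data:

```python
sf.fit(df=train)
```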
Let's see the results of our Seasonal Exponential Smoothing Optimized model. We can observe it with the following instruction:
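One way to inspect the fitted model object is the indexing pattern below, where `fitted_[0, 0]` refers to the first series and the first model; this convention is an assumption carried over from similar StatsForecast tutorials:

```python
sf.fitted_[0, 0].model_
```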
Let us now visualize the fitted values of our models.
As we can see, the result obtained above is a dictionary. To extract each element from the dictionary we use the `.get()` method, and then we save it in a `pd.DataFrame()`.
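A sketch of that extraction, assuming the in-sample predictions live under the `"fitted"` key of the dictionary:

```python
fitted_values = pd.DataFrame(
    sf.fitted_[0, 0].model_.get("fitted"),  # assumed key for in-sample predictions
    columns=["fitted"],
)
fitted_values["ds"] = train["ds"].values  # align with the training timestamps
fitted_values
```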
| | fitted | ds |
|---|---|---|
| 0 | NaN | 2017-09-13 00:00:00 |
| 1 | NaN | 2017-09-13 01:00:00 |
| 2 | NaN | 2017-09-13 02:00:00 |
| … | … | … |
| 183 | 148833.171875 | 2017-09-20 15:00:00 |
| 184 | 149860.031250 | 2017-09-20 16:00:00 |
| 185 | 150673.375000 | 2017-09-20 17:00:00 |
Forecast Method
If you want to gain speed in productive settings where you have multiple series or models, we recommend using the `StatsForecast.forecast` method instead of `.fit` and `.predict`.

The main difference is that `.forecast` does not store the fitted values and is highly scalable in distributed environments.
The forecast method takes two arguments: forecasts for the next `h` (horizon) and `level`.

- `h` (int): represents the forecast `h` steps into the future. In this case, 30 hours ahead.

The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals when `level` is used. Depending on your computer, this step should take around 1 min.
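A sketch of the call; `fitted=True` additionally stores the in-sample values so they can be retrieved afterwards:

```python
Y_hat_df = sf.forecast(df=train, h=30, fitted=True)
Y_hat_df.head()
```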
| | unique_id | ds | SeasESOpt |
|---|---|---|---|
| 0 | 1 | 2017-09-20 18:00:00 | 161532.046875 |
| 1 | 1 | 2017-09-20 19:00:00 | 161051.687500 |
| 2 | 1 | 2017-09-20 20:00:00 | 135531.640625 |
| … | … | … | … |
| 27 | 1 | 2017-09-21 21:00:00 | 105600.390625 |
| 28 | 1 | 2017-09-21 22:00:00 | 96717.390625 |
| 29 | 1 | 2017-09-21 23:00:00 | 82608.343750 |
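Since `fitted=True` was passed above, the in-sample predictions can also be retrieved; in the table below the first rows are `NaN` because the model needs a warm-up period before fitted values exist. A sketch:

```python
insample_df = sf.forecast_fitted_values()
insample_df.head()
```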
| | unique_id | ds | y | SeasESOpt |
|---|---|---|---|---|
| 0 | 1 | 2017-09-13 00:00:00 | 80115.0 | NaN |
| 1 | 1 | 2017-09-13 01:00:00 | 79885.0 | NaN |
| 2 | 1 | 2017-09-13 02:00:00 | 89325.0 | NaN |
| 3 | 1 | 2017-09-13 03:00:00 | 101930.0 | NaN |
| 4 | 1 | 2017-09-13 04:00:00 | 121630.0 | NaN |
Predict method with confidence interval
To generate forecasts use the predict method.
The predict method takes two arguments: forecasts for the next `h` (for horizon) and `level`.

- `h` (int): represents the forecast `h` steps into the future. In this case, 30 hours ahead.
The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals.
This step should take less than 1 second.
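A minimal sketch; no `level` is passed here, since the output below contains only point forecasts:

```python
forecast_df = sf.predict(h=30)
forecast_df.head()
```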
| | unique_id | ds | SeasESOpt |
|---|---|---|---|
| 0 | 1 | 2017-09-20 18:00:00 | 161532.046875 |
| 1 | 1 | 2017-09-20 19:00:00 | 161051.687500 |
| 2 | 1 | 2017-09-20 20:00:00 | 135531.640625 |
| … | … | … | … |
| 27 | 1 | 2017-09-21 21:00:00 | 105600.390625 |
| 28 | 1 | 2017-09-21 22:00:00 | 96717.390625 |
| 29 | 1 | 2017-09-21 23:00:00 | 82608.343750 |
Cross-validation
In previous steps, we've taken our historical data to predict the future. However, to assess its accuracy we would also like to know how the model would have performed in the past. To assess the accuracy and robustness of your model on your data, perform cross-validation.
With time series data, Cross Validation is done by defining a sliding window across the historical data and predicting the period following it. This form of cross-validation allows us to arrive at a better estimation of our model’s predictive abilities across a wider range of temporal instances while also keeping the data in the training set contiguous as is required by our models.
The following graph depicts such a Cross Validation Strategy:
Perform time series cross-validation
Cross-validation of time series models is considered a best practice but most implementations are very slow. The statsforecast library implements cross-validation as a distributed operation, making the process less time-consuming to perform. If you have big datasets you can also perform Cross Validation in a distributed cluster using Ray, Dask or Spark.
In this case, we want to evaluate the performance of the model over the last three windows (n_windows=3), forecasting 30 hours in each window (h=30) and moving the cutoff forward 30 hours at a time (step_size=30); these settings can be read off the cutoffs in the output below. Depending on your computer, this step should take around 1 min.
The cross_validation method from the StatsForecast class takes the following arguments.
- `df`: training data frame.
- `h` (int): represents `h` steps into the future that are being forecasted. In this case, 30 hours ahead.
- `step_size` (int): step size between each window. In other words: how often do you want to run the forecasting process.
- `n_windows` (int): number of windows used for cross-validation. In other words: how many forecasting processes in the past do you want to evaluate.
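A sketch of the call; the window settings are inferred from the cutoffs in the output below, and the full data frame is used so the final window is scored against the held-out hours:

```python
crossvalidation_df = sf.cross_validation(
    df=df,
    h=30,          # forecast 30 hours in each window
    step_size=30,  # move the cutoff forward 30 hours at a time
    n_windows=3,   # evaluate three windows
)
crossvalidation_df.head()
```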
The `crossvalidation_df` object is a new data frame that includes the following columns:

- `unique_id`: series identifier.
- `ds`: datestamp or temporal index.
- `cutoff`: the last datestamp or temporal index for the `n_windows`.
- `y`: true value.
- `model`: columns with the model's name and fitted value.
| | unique_id | ds | cutoff | y | SeasESOpt |
|---|---|---|---|---|---|
| 0 | 1 | 2017-09-18 06:00:00 | 2017-09-18 05:00:00 | 99440.0 | 141401.750000 |
| 1 | 1 | 2017-09-18 07:00:00 | 2017-09-18 05:00:00 | 97655.0 | 152474.250000 |
| 2 | 1 | 2017-09-18 08:00:00 | 2017-09-18 05:00:00 | 97655.0 | 152482.796875 |
| … | … | … | … | … | … |
| 87 | 1 | 2017-09-21 21:00:00 | 2017-09-20 17:00:00 | 103080.0 | 105600.390625 |
| 88 | 1 | 2017-09-21 22:00:00 | 2017-09-20 17:00:00 | 95155.0 | 96717.390625 |
| 89 | 1 | 2017-09-21 23:00:00 | 2017-09-20 17:00:00 | 80285.0 | 82608.343750 |
Model Evaluation
Now we are going to evaluate our model with the results of the predictions. We will use several metrics (MAE, MAPE, MASE, RMSE, SMAPE) to evaluate the accuracy, as sketched below.
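A sketch of one way the table below could be produced, using the `evaluate` helper from utilsforecast; the seasonality passed to MASE (24) and the use of the training set for its scaling term are assumptions:

```python
from functools import partial

from utilsforecast.evaluation import evaluate
from utilsforecast.losses import mae, mape, mase, rmse, smape

evaluation_df = evaluate(
    crossvalidation_df.drop(columns="cutoff"),
    metrics=[mae, mape, partial(mase, seasonality=24), rmse, smape],
    train_df=train,  # MASE needs training data for scaling (assumption)
)
evaluation_df
```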
| | unique_id | metric | SeasESOpt |
|---|---|---|---|
| 0 | 1 | mae | 6694.042188 |
| 1 | 1 | mape | 0.060392 |
| 2 | 1 | mase | 0.827062 |
| 3 | 1 | rmse | 8118.297509 |
| 4 | 1 | smape | 0.028961 |
Acknowledgements
We would like to thank Naren Castellon for writing this tutorial.
References
- Changquan Huang and Alla Petukhina (2022). Applied Time Series Analysis and Forecasting with Python. Springer.
- Ivan Svetunkov. Forecasting and Analytics with the Augmented Dynamic Adaptive Model (ADAM).
- James D. Hamilton (1994). Time Series Analysis. Princeton University Press, Princeton, New Jersey, 1st edition.
- Nixtla Parameters.
- Pandas available frequencies.
- Rob J. Hyndman and George Athanasopoulos (2018). Forecasting: Principles and Practice, "Time series cross-validation".
- Rob J. Hyndman. Seasonal periods.