Dynamic Optimized Theta Model
Step-by-step guide on using the DynamicOptimizedTheta model with StatsForecast.
Table of Contents
- Introduction
- Dynamic Optimized Theta Model (DOTM)
- Loading libraries and data
- Explore data with the plot method
- Split the data into training and testing
- Implementation of DynamicOptimizedTheta with StatsForecast
- Cross-validation
- Model evaluation
- References
Introduction
The Dynamic Optimized Theta Model (DOTM) is a forecasting technique that is used to predict future values of a time series. It is a variant of the Theta method, which combines exponential smoothing and a linear trend to forecast future values.
The DOTM extends the Theta method by introducing a dynamic optimization process that selects the optimal smoothing parameters for the exponential smoothing component and the optimal weights for the linear trend component based on the historical data. This optimization is performed by numerically searching for the combination of parameters that minimizes the in-sample forecast error.
The DOTM is designed to handle time series data that exhibit non-linear and non-stationary behavior over time. It is particularly useful for forecasting time series with complex patterns such as seasonality, trend, and cyclical fluctuations.
The DOTM has several advantages over other forecasting methods. First, it is a simple and easy-to-implement method that does not require extensive statistical knowledge. Second, it is a highly adaptable method that can be customized to fit a wide range of time series data. Third, it is a robust method that can handle missing data, outliers, and other anomalies in the time series.
Overall, the Dynamic Optimized Theta Model is a powerful forecasting technique that can be used to generate accurate and reliable predictions for a wide range of time series data.
Dynamic Optimized Theta Models (DOTM)
So far, we have set $A_n$ and $B_n$ as fixed coefficients for all $t$. We will now consider these coefficients as dynamic functions; i.e., for updating the state $\ell_{t-1}$ to $\ell_t$ we will only consider the prior information $Y_1,\dots,Y_t$ when computing $A_t$ and $B_t$, the coefficients of the simple linear regression of $Y_1,\dots,Y_t$ on $1,\dots,t$:

$$A_t = \frac{1}{t}\sum_{i=1}^{t} Y_i - \frac{t+1}{2}B_t, \qquad B_t = \frac{6}{t(t^2-1)}\sum_{i=1}^{t}\left(2i - t - 1\right)Y_i.$$

Hence, we replace $A_n$ and $B_n$ in equations (3) and (4) of the notebook of the optimized theta model with $A_{t-1}$ and $B_{t-1}$. Then, after applying the new Eq. (4) to the new Eq. (3) and rewriting the result at time $t$ with $\mu_t = \mathrm{E}\left[Y_t \mid Y_1,\dots,Y_{t-1}\right]$, we have

$$\mu_t = \ell_{t-1} + \left(1 - \frac{1}{\theta}\right)\left[(1-\alpha)^{t-1}A_{t-1} + \left\{t - \frac{1-(1-\alpha)^{t}}{\alpha}\right\}B_{t-1}\right].$$

Then, assuming additive one-step-ahead errors $\varepsilon_t = Y_t - \mu_t \sim N(0,\sigma^2)$ and rewriting Eqs. (3) (see AutoTheta Model) and (1), we obtain

$$Y_t = \mu_t + \varepsilon_t, \qquad \ell_t = \alpha Y_t + (1-\alpha)\,\ell_{t-1},$$

for $t = 1,\dots,n$. Together, these equations configure a state space model with parameters $\alpha$, $\theta$ and $\ell_0$. The initialisation of the states is performed assuming $A_1 = Y_1$ and $B_1 = 0$. From here on, we will refer to this model as the dynamic optimised Theta model (DOTM).
An important property of the DOTM is that when $\theta = 1$, which implies that $1 - 1/\theta = 0$, the forecast given by Eq. (3) (see OTM) will be equal to

$$\mu_t = \ell_{t-1}.$$

Thus, when $\theta = 1$, the DOTM is the SES method. When $\theta > 1$, DOTM (like SES-d) acts as an extension of SES, by adding a long-term component.

The out-of-sample one-step-ahead forecast produced by DOTM at origin $n$ is given by

$$\hat{Y}_{n+1\mid n} = \ell_{n} + \left(1 - \frac{1}{\theta}\right)\left[(1-\alpha)^{n}A_{n} + \left\{n + 1 - \frac{1-(1-\alpha)^{n+1}}{\alpha}\right\}B_{n}\right].$$

For a horizon $h \geq 2$, the forecasts are computed recursively using the state space equations above, replacing the non-observed values $Y_{n+1},\dots,Y_{n+h-1}$ with their expected values $\hat{Y}_{n+1\mid n},\dots,\hat{Y}_{n+h-1\mid n}$. The conditional variance $\mathrm{Var}\left[Y_{n+h}\mid Y_1,\dots,Y_n\right]$ is hard to write analytically. However, the variance and prediction intervals for $Y_{n+h}$ can be estimated using the bootstrapping technique, where a (usually large) sample of possible values of $Y_{n+h}$ is simulated from the estimated model.
Note that, in contrast to STheta, STM and OTM, the forecasts produced by DSTM and DOTM are not necessarily linear. This is also a fundamental difference between DSTM/DOTM and SES-d: while the long-term trend in SES-d is constant, this is not the case for DSTM/DOTM, for either the in-sample fit or the out-of-sample predictions.
Loading libraries and data
Tip
Statsforecast will be needed. To install, see instructions.
Next, we import plotting libraries and configure the plotting style.
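The snippet below is a minimal sketch of this setup; it assumes the statsforecast package is already installed (e.g. with pip) and uses matplotlib for plotting.

```python
# If needed: pip install statsforecast

import pandas as pd
import matplotlib.pyplot as plt

# Basic plotting configuration
plt.style.use("grayscale")
plt.rcParams["figure.figsize"] = (10, 5)
```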
Read Data
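A minimal sketch of loading the data; the file name used here is hypothetical, so point pandas at wherever your copy of the monthly production series is stored.

```python
import pandas as pd

# Hypothetical path; the CSV is expected to have the columns "month" and "production"
df = pd.read_csv("monthly_production.csv")

df.head()
```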
 | month | production |
---|---|---|
0 | 1962-01-01 | 589 |
1 | 1962-02-01 | 561 |
2 | 1962-03-01 | 640 |
3 | 1962-04-01 | 656 |
4 | 1962-05-01 | 727 |
The input to StatsForecast is always a data frame in long format with three columns: unique_id, ds and y:

- The unique_id (string, int or category) represents an identifier for the series.
- The ds (datestamp) column should be of a format expected by Pandas, ideally YYYY-MM-DD for a date or YYYY-MM-DD HH:MM:SS for a timestamp.
- The y (numeric) represents the measurement we wish to forecast.
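To match this format we rename the columns from the raw file and add a constant identifier, since we only have one series; a minimal sketch:

```python
# Rename the raw columns to the names StatsForecast expects
df = df.rename(columns={"month": "ds", "production": "y"})

# Single series, so a constant identifier is enough
df["unique_id"] = 1

df.head()
```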
 | ds | y | unique_id |
---|---|---|---|
0 | 1962-01-01 | 589 | 1 |
1 | 1962-02-01 | 561 | 1 |
2 | 1962-03-01 | 640 | 1 |
3 | 1962-04-01 | 656 | 1 |
4 | 1962-05-01 | 727 | 1 |
We can see that our time variable (ds) is in an object format, so we need to convert it to a date format.
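A single pandas call handles the conversion, assuming the ds column holds date strings:

```python
# Convert the datestamp column from object (string) to datetime
df["ds"] = pd.to_datetime(df["ds"])

df.dtypes
```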
Explore Data with the plot method
Plot some series using the plot method from the StatsForecast class. This method plots a random series from the dataset and is useful for basic EDA.
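A sketch of the call, using the long-format frame built above:

```python
from statsforecast import StatsForecast

# Plot the (single) series in the data frame
StatsForecast.plot(df)
```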
Autocorrelation plots
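The autocorrelation and partial autocorrelation plots can be produced with statsmodels; a minimal sketch:

```python
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

fig, axes = plt.subplots(1, 2, figsize=(12, 4))
plot_acf(df["y"], lags=30, ax=axes[0])   # autocorrelation function
plot_pacf(df["y"], lags=30, ax=axes[1])  # partial autocorrelation function
plt.show()
```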
Decomposition of the time series
How to decompose a time series and why?
In time series analysis, to forecast new values it is very important to understand past data. More formally, it is very important to know the patterns that values follow over time. There can be many reasons that cause our forecast values to go in the wrong direction. Basically, a time series consists of four components, and variation in these components causes changes in the pattern of the time series. These components are:
- Level: the base value around which the series averages over time.
- Trend: the component that causes increasing or decreasing patterns in a time series.
- Seasonality: a cyclical event that occurs in a time series for a short time and causes short-term increasing or decreasing patterns.
- Residual/Noise: the random variations in the time series.
Combining these components over time leads to the formation of a time series. Most time series consist of level and noise/residual, while trend and seasonality are optional.
If seasonality and trend are part of the time series, they will affect the forecast values, because the pattern of the forecasted time series may differ from that of the past series.
The combination of the components in a time series can be of two types:
- Additive
- Multiplicative
Additive time series
If the components of the time series are added together to make the time series, then it is called an additive time series. By visualization, we can say that a time series is additive if its increasing or decreasing pattern is similar throughout the series. Any additive time series can be represented mathematically as:

$$y(t) = \text{Level} + \text{Trend} + \text{Seasonality} + \text{Noise}$$
Multiplicative time series
If the components of the time series are multiplied together, then it is called a multiplicative time series. Visually, if the time series has exponential growth or decline over time, then it can be considered a multiplicative time series. The multiplicative time series can be represented mathematically as:

$$y(t) = \text{Level} \times \text{Trend} \times \text{Seasonality} \times \text{Noise}$$
Additive
Multiplicative
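Both decompositions can be obtained with statsmodels' seasonal_decompose; a minimal sketch, assuming a monthly seasonal period of 12:

```python
from statsmodels.tsa.seasonal import seasonal_decompose

series = df.set_index("ds")["y"]

# Additive decomposition: y(t) = Level + Trend + Seasonality + Noise
seasonal_decompose(series, model="additive", period=12).plot()

# Multiplicative decomposition: y(t) = Level * Trend * Seasonality * Noise
seasonal_decompose(series, model="multiplicative", period=12).plot()
```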
Split the data into training and testing
Let’s divide our data into sets:

- Data to train our Dynamic Optimized Theta Model (DOTM).
- Data to test our model.

For the test data we will use the last 12 months to test and evaluate the performance of our model.
Now let’s plot the training data and the test data.
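A minimal sketch of the split and the plot, holding out the last 12 monthly observations:

```python
import matplotlib.pyplot as plt

# Last 12 months for testing, everything before that for training
train = df.iloc[:-12]
test = df.iloc[-12:]

fig, ax = plt.subplots()
ax.plot(train["ds"], train["y"], label="train")
ax.plot(test["ds"], test["y"], label="test")
ax.legend()
plt.show()
```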
Implementation of DynamicOptimizedTheta with StatsForecast
Load libraries
Instantiating Model
Import and instantiate the models. Setting the season_length argument is sometimes tricky. This article on Seasonal periods by the master, Rob Hyndman, can be useful for season_length.
We fit the models by instantiating a new StatsForecast object with the following parameters:

- models: a list of models. Select the models you want from models and import them.
- freq: a string indicating the frequency of the data. (See pandas’ available frequencies.)
- n_jobs: int, number of jobs used in the parallel processing; use -1 for all cores.
- fallback_model: a model to be used if a model fails.
Any settings are passed into the constructor. Then you call its fit method and pass in the historical data frame.
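A minimal sketch of the instantiation for this monthly series, using a month-start frequency and season_length=12:

```python
from statsforecast import StatsForecast
from statsforecast.models import DynamicOptimizedTheta

season_length = 12  # monthly data

sf = StatsForecast(
    models=[DynamicOptimizedTheta(season_length=season_length)],
    freq="MS",   # month-start frequency, matching the ds column
    n_jobs=-1,   # use all available cores
)
```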
Fit the Model
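Fitting is a single call on the training frame (a sketch, using the train split defined earlier):

```python
sf.fit(df=train)
```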
Let’s see the results of our Dynamic Optimized Theta Model. We can observe it with the following instruction:
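A sketch of that instruction; fitted_ holds one fitted model per (series, model) pair, and its model_ attribute is the dictionary of results referred to below (the exact keys depend on the model):

```python
# First index: series, second index: model
result = sf.fitted_[0, 0].model_
result
```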
Let us now visualize the residuals of our model.

As we can see, the result obtained above is a dictionary. To extract each element from the dictionary we are going to use the .get() function and then save it in a pd.DataFrame().
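A sketch of that extraction; the "residuals" key is an assumption based on the table shown below:

```python
residual = pd.DataFrame(
    sf.fitted_[0, 0].model_.get("residuals"),  # in-sample residuals
    columns=["residual Model"],
)
residual
```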
 | residual Model |
---|---|
0 | -18.247131 |
1 | -88.625732 |
2 | 2.864929 |
… | … |
153 | -59.747070 |
154 | -91.901550 |
155 | -43.503296 |
Forecast Method
If you want to gain speed in productive settings where you have multiple series or models, we recommend using the StatsForecast.forecast method instead of .fit and .predict.

The main difference is that .forecast does not store the fitted values and is highly scalable in distributed environments.
The forecast method takes two arguments: forecasts the next h (horizon) and level.

- h (int): represents the forecast h steps into the future. In this case, 12 months ahead.
- level (list of floats): this optional parameter is used for probabilistic forecasting. Set the level (or confidence percentile) of your prediction interval. For example, level=[90] means that the model expects the real value to be inside that interval 90% of the time.
The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals. Depending on your computer, this step should take around 1min.
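A minimal sketch of the call; passing fitted=True also stores the in-sample values so they can be retrieved afterwards:

```python
Y_hat_df = sf.forecast(df=train, h=12, fitted=True)
Y_hat_df.head()
```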
 | unique_id | ds | DynamicOptimizedTheta |
---|---|---|---|
0 | 1 | 1975-01-01 | 839.259705 |
1 | 1 | 1975-02-01 | 801.399170 |
2 | 1 | 1975-03-01 | 895.189148 |
… | … | … | … |
9 | 1 | 1975-10-01 | 821.271179 |
10 | 1 | 1975-11-01 | 792.530457 |
11 | 1 | 1975-12-01 | 829.854492 |
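Because fitted=True was passed above, the in-sample (fitted) values shown next can be retrieved with forecast_fitted_values; a sketch:

```python
values = sf.forecast_fitted_values()
values.head()
```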
 | unique_id | ds | y | DynamicOptimizedTheta |
---|---|---|---|---|
0 | 1 | 1962-01-01 | 589.0 | 607.247131 |
1 | 1 | 1962-02-01 | 561.0 | 649.625732 |
2 | 1 | 1962-03-01 | 640.0 | 637.135071 |
3 | 1 | 1962-04-01 | 656.0 | 609.225830 |
4 | 1 | 1962-05-01 | 727.0 | 604.995300 |
Adding 95% confidence interval with the forecast method
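A sketch of the same call with a 95% prediction interval:

```python
forecast_df = sf.forecast(df=train, h=12, level=[95])
forecast_df
```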
 | unique_id | ds | DynamicOptimizedTheta | DynamicOptimizedTheta-lo-95 | DynamicOptimizedTheta-hi-95 |
---|---|---|---|---|---|
0 | 1 | 1975-01-01 | 839.259705 | 741.963501 | 955.137634 |
1 | 1 | 1975-02-01 | 801.399170 | 641.886292 | 946.029114 |
2 | 1 | 1975-03-01 | 895.189148 | 707.210754 | 1066.337280 |
… | … | … | … | … | … |
9 | 1 | 1975-10-01 | 821.271179 | 546.113586 | 1088.162842 |
10 | 1 | 1975-11-01 | 792.530457 | 494.658173 | 1037.432129 |
11 | 1 | 1975-12-01 | 829.854492 | 519.697021 | 1108.182007 |
Predict method with confidence interval
To generate forecasts use the predict method.
The predict method takes two arguments: forecasts the next h (for horizon) and level.

- h (int): represents the forecast h steps into the future. In this case, 12 months ahead.
- level (list of floats): this optional parameter is used for probabilistic forecasting. Set the level (or confidence percentile) of your prediction interval. For example, level=[95] means that the model expects the real value to be inside that interval 95% of the time.
The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals.
This step should take less than 1 second.
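A sketch of both calls; predict reuses the model fitted earlier with sf.fit, so no data frame is passed:

```python
# Point forecasts only
sf.predict(h=12)

# Point forecasts plus 80% and 95% prediction intervals
forecast_df = sf.predict(h=12, level=[80, 95])
forecast_df
```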
 | unique_id | ds | DynamicOptimizedTheta |
---|---|---|---|
0 | 1 | 1975-01-01 | 839.259705 |
1 | 1 | 1975-02-01 | 801.399170 |
2 | 1 | 1975-03-01 | 895.189148 |
… | … | … | … |
9 | 1 | 1975-10-01 | 821.271179 |
10 | 1 | 1975-11-01 | 792.530457 |
11 | 1 | 1975-12-01 | 829.854492 |
 | unique_id | ds | DynamicOptimizedTheta | DynamicOptimizedTheta-lo-80 | DynamicOptimizedTheta-hi-80 | DynamicOptimizedTheta-lo-95 | DynamicOptimizedTheta-hi-95 |
---|---|---|---|---|---|---|---|
0 | 1 | 1975-01-01 | 839.259705 | 766.150513 | 928.015259 | 741.963501 | 955.137634 |
1 | 1 | 1975-02-01 | 801.399170 | 702.992554 | 899.872864 | 641.886292 | 946.029114 |
2 | 1 | 1975-03-01 | 895.189148 | 760.141479 | 1008.321960 | 707.210754 | 1066.337280 |
… | … | … | … | … | … | … | … |
9 | 1 | 1975-10-01 | 821.271179 | 617.415405 | 996.678406 | 546.113586 | 1088.162842 |
10 | 1 | 1975-11-01 | 792.530457 | 568.329285 | 975.049255 | 494.658173 | 1037.432129 |
11 | 1 | 1975-12-01 | 829.854492 | 598.125183 | 1035.452637 | 519.697021 | 1108.182007 |
Cross-validation
In previous steps, we’ve taken our historical data to predict the future. However, to assess its accuracy, we would also like to know how the model would have performed in the past. To assess the accuracy and robustness of your models on your data, perform Cross-Validation.
With time series data, Cross Validation is done by defining a sliding window across the historical data and predicting the period following it. This form of cross-validation allows us to arrive at a better estimation of our model’s predictive abilities across a wider range of temporal instances while also keeping the data in the training set contiguous as is required by our models.
The following graph depicts such a Cross Validation Strategy:
Perform time series cross-validation
Cross-validation of time series models is considered a best practice but most implementations are very slow. The statsforecast library implements cross-validation as a distributed operation, making the process less time-consuming to perform. If you have big datasets you can also perform Cross Validation in a distributed cluster using Ray, Dask or Spark.
In this case, we want to evaluate the performance of the model over the last years of the training data, using three validation windows of 12 months each (n_windows=3), spaced 12 months apart (step_size=12). Depending on your computer, this step should take around 1 min.
The cross_validation method from the StatsForecast class takes the following arguments:

- df: training data frame
- h (int): represents h steps into the future that are being forecasted. In this case, 12 months ahead.
- step_size (int): step size between each window. In other words: how often do you want to run the forecasting process.
- n_windows (int): number of windows used for cross-validation. In other words: how many forecasting processes in the past you want to evaluate.
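A sketch of the call that produces the crossvalidation_df discussed next:

```python
crossvalidation_df = sf.cross_validation(
    df=train,
    h=12,          # forecast 12 months in each window
    step_size=12,  # move the cutoff forward 12 months between windows
    n_windows=3,   # number of validation windows
)
crossvalidation_df.head()
```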
The crossvalidation_df object is a new data frame that includes the following columns:

- unique_id: series identifier
- ds: datestamp or temporal index
- cutoff: the last datestamp or temporal index for the n_windows.
- y: true value
- "model": columns with the model’s name and fitted value.
 | unique_id | ds | cutoff | y | DynamicOptimizedTheta |
---|---|---|---|---|---|
0 | 1 | 1972-01-01 | 1971-12-01 | 826.0 | 828.692017 |
1 | 1 | 1972-02-01 | 1971-12-01 | 799.0 | 792.444092 |
2 | 1 | 1972-03-01 | 1971-12-01 | 890.0 | 883.122620 |
… | … | … | … | … | … |
33 | 1 | 1974-10-01 | 1973-12-01 | 812.0 | 810.342834 |
34 | 1 | 1974-11-01 | 1973-12-01 | 773.0 | 781.845703 |
35 | 1 | 1974-12-01 | 1973-12-01 | 813.0 | 818.855103 |
Model Evaluation
Now we are going to evaluate our model with the results of the predictions. We will use different types of metrics (MAE, MAPE, MASE, RMSE, SMAPE) to evaluate the accuracy.
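One way to compute these metrics is with the utilsforecast package; the sketch below assumes the test split and the forecast_df with point forecasts produced earlier, and uses functools.partial to give MASE its seasonal period:

```python
from functools import partial

from utilsforecast.evaluation import evaluate
from utilsforecast.losses import mae, mape, mase, rmse, smape

# Attach the true test values to the point forecasts
eval_df = test.merge(
    forecast_df[["unique_id", "ds", "DynamicOptimizedTheta"]],
    on=["unique_id", "ds"],
)

evaluation = evaluate(
    eval_df,
    metrics=[mae, mape, partial(mase, seasonality=12), rmse, smape],
    train_df=train,  # MASE needs the training data for its scaling term
)
evaluation
```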
 | unique_id | metric | DynamicOptimizedTheta |
---|---|---|---|
0 | 1 | mae | 6.861954 |
1 | 1 | mape | 0.008045 |
2 | 1 | mase | 0.308595 |
3 | 1 | rmse | 8.647457 |
4 | 1 | smape | 0.004010 |
Acknowledgements
We would like to thank Naren Castellon for writing this tutorial.
References
- Kostas I. Nikolopoulos, Dimitrios D. Thomakos. Forecasting with the Theta Method-Theory and Applications. 2019 John Wiley & Sons Ltd.
- Jose A. Fiorucci, Tiago R. Pellegrini, Francisco Louzada, Fotios Petropoulos, Anne B. Koehler (2016). “Models for optimising the theta method and their relationship to state space models”. International Journal of Forecasting.
- Nixtla Parameters.
- Pandas available frequencies.
- Rob J. Hyndman and George Athanasopoulos (2018). “Forecasting: Principles and Practice, Time series cross-validation”.
- Seasonal periods – Rob J. Hyndman.