Exponential smoothing has been one of the most popular forecasting
methods used to support various decisions in organizations, in
activities such as inventory management, scheduling, revenue management,
and other areas. Although its relative simplicity and transparency have
made it very attractive for research and practice, identifying the
underlying trend remains challenging, with a significant impact on the
resulting accuracy. This has led to the development of various
modifications of trend models, introducing a model selection problem.
With the aim of addressing this problem, we propose the complex
exponential smoothing (CES), based on the theory of functions of complex
variables. The basic CES approach involves only two parameters and does
not require a model selection procedure. Despite these simplifications,
CES proves to be competitive with, or even superior to, existing methods.
We show that CES has several advantages over conventional exponential
smoothing models: it can model and forecast both stationary and
non-stationary processes, and CES can capture both level and trend
cases, as defined in the conventional exponential smoothing
classification. CES is evaluated on several forecasting competition
datasets, demonstrating better performance than established benchmarks.
We conclude that CES has desirable features for time series modeling and
opens new promising avenues for research.
Using the complex-valued representation of time series, we propose CES
in analogy to the conventional exponential smoothing methods.
Consider the simple exponential smoothing method:

$$\hat{y}_t = \alpha y_{t-1} + (1-\alpha)\hat{y}_{t-1}, \quad (1)$$

where $\alpha$ is the smoothing parameter and $\hat{y}_t$ is the
estimated value of the series. The same method can be represented as a
weighted average of the previous actual observations if we recursively
substitute $\hat{y}_{t-1}$ using formula (1) with the index shifted back
by one (Brown, 1956):

$$\hat{y}_t = \alpha \sum_{j=1}^{t-1} (1-\alpha)^{j-1} y_{t-j}. \quad (2)$$

The idea of this representation is to demonstrate how the weights
$\alpha(1-\alpha)^{j-1}$ are distributed over time in our
sample. If the smoothing parameter $\alpha \in (0,1)$, then
the weights decline exponentially as $j$ increases. If it lies in
the so called “admissible bounds” (Brenner et al., 1968), that is
$\alpha \in (0,2)$, then the weights decline in an oscillating
manner. Both traditional and admissible bounds have been used
efficiently in practice and in academic literature (for application of
the latter see for example Gardner & Diaz-Saiz, 2008; Snyder et al.,
2017). However, in real life the distribution of weights can be more
complex, with harmonic rather than exponential decline, meaning that
some of the past observations might carry more weight than the recent
ones. In order to implement such a distribution of weights, we build upon
(2) and introduce complex dynamic interactions by substituting the real
variables with complex ones in (2). First, we substitute $y_{t-j}$
by the complex variable $y_{t-j} + ie_{t-j}$, where $e_t$ is the
error term of the model and $i$ is the imaginary unit (which satisfies
the equation $i^2 = -1$). The idea behind this is to have the impact of
both the actual values and the errors of each past observation on the
final forecast. Second, we substitute $\alpha$ with a complex variable
$\alpha_0 + i\alpha_1$ and $1$ by $1 + i$ to introduce the harmonically
declining weights. Depending on the values of the complex smoothing
parameter, the weights distribution will exhibit a variety of
trajectories over time, including exponential, oscillating, and
harmonic. Finally, the result of multiplying two complex numbers
is another complex number, so we substitute $\hat{y}_{t-j}$ with
$\hat{y}_{t-j} + i\hat{e}_{t-j}$, where $\hat{e}_{t-j}$ is the proxy
for the error term. The resulting CES can be written as:

$$\hat{y}_t + i\hat{e}_t = (\alpha_0 + i\alpha_1) \sum_{j=1}^{t-1} \left(1 + i - (\alpha_0 + i\alpha_1)\right)^{j-1} (y_{t-j} + ie_{t-j}). \quad (3)$$

Having arrived at the model with harmonically distributed weights, we
can now move to the shorter form by substituting

$$\hat{y}_{t-1} + i\hat{e}_{t-1} = (\alpha_0 + i\alpha_1) \sum_{j=2}^{t-1} \left(1 + i - (\alpha_0 + i\alpha_1)\right)^{j-2} (y_{t-j} + ie_{t-j})$$

in (3) to get:

$$\hat{y}_t + i\hat{e}_t = (\alpha_0 + i\alpha_1)(y_{t-1} + ie_{t-1}) + (1 - \alpha_0 + i - i\alpha_1)(\hat{y}_{t-1} + i\hat{e}_{t-1}). \quad (4)$$

Note that $\hat{e}_t$ is not interesting for time series analysis
and forecasting purposes, but is used as a vessel containing the
information about the previous errors of the method. Having the complex
variables instead of real ones in (4) allows taking the
exponentially weighted values of both the actuals and the forecast errors.
By changing the value of $\alpha_0 + i\alpha_1$, we can regulate what
proportions of the actuals and the forecast errors should be carried
over to the future in order to produce forecasts.

Representing the complex-valued function as a system of two real-valued
functions leads to:

$$\begin{aligned} \hat{y}_t &= (\alpha_0 y_{t-1} + (1-\alpha_0)\hat{y}_{t-1}) - (\alpha_1 e_{t-1} + (1-\alpha_1)\hat{e}_{t-1}) \\ \hat{e}_t &= (\alpha_1 y_{t-1} + (1-\alpha_1)\hat{y}_{t-1}) + (\alpha_0 e_{t-1} + (1-\alpha_0)\hat{e}_{t-1}). \end{aligned} \quad (5)$$

CES introduces an interaction between the real and imaginary parts: the
equations in (5) are connected via the previous values of each
other, causing interactions over time defined by the complex smoothing
parameter value.

But the method itself is restrictive: it does not easily allow producing
prediction intervals and deriving the likelihood function. It is also
important to understand what sort of statistical model underlies CES.
This model can be written in the following state space form:

$$\begin{aligned} y_t &= l_{t-1} + \epsilon_t \\ l_t &= l_{t-1} - (1-\alpha_1)c_{t-1} + (\alpha_0 - \alpha_1)\epsilon_t \\ c_t &= l_{t-1} + (1-\alpha_0)c_{t-1} + (\alpha_0 + \alpha_1)\epsilon_t, \end{aligned} \quad (6)$$

where $\epsilon_t$ is the white noise error term, $l_t$ is the level
component and $c_t$ is the nonlinear trend component at observation $t$.
Observe that dependencies in the time series have an interactive structure
and no explicit trend component is present in the time series, as this
model does not need to artificially break the series into level and
trend, as ETS does. Although we call the $c_t$ component the "nonlinear
trend," it does not correspond to the conventional trend component,
because it contains the information of both the previous $c_{t-1}$ and the
level $l_{t-1}$. Also, note that we use $\epsilon_t$ instead of
$e_t$ in (6), which means that CES has (6) as an underlying
statistical model only when there is no misspecification error. When the
model is estimated, $\epsilon_t$ is substituted by $e_t$, which leads us
back to the original formulation (4).

This idea allows rewriting (6) in a shorter, more generic way, resembling
the general single source of error (SSOE) state space framework:

$$\begin{aligned} y_t &= \mathbf{w}' \mathbf{v}_{t-1} + \epsilon_t \\ \mathbf{v}_t &= \mathbf{F} \mathbf{v}_{t-1} + \mathbf{g}\epsilon_t, \end{aligned} \quad (7)$$

where $\mathbf{v}_t = \begin{pmatrix} l_t \\ c_t \end{pmatrix}$
is the state vector,
$\mathbf{F} = \begin{pmatrix} 1 & -(1-\alpha_1) \\ 1 & 1-\alpha_0 \end{pmatrix}$
is the transition matrix,
$\mathbf{g} = \begin{pmatrix} \alpha_0 - \alpha_1 \\ \alpha_0 + \alpha_1 \end{pmatrix}$
is the persistence vector and
$\mathbf{w} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ is the
measurement vector.

The state space form (7) permits extending CES in a similar way to ETS
to include additional states for seasonality or exogenous variables. The
main difference between model (7) and the conventional ETS is that the
transition matrix in (7) includes smoothing parameters, which is not a
standard feature of ETS models. Furthermore, the persistence vector includes
the interaction of the complex smoothing parameters, rather than the smoothing
parameters themselves.

The error term in (6) is additive, so the likelihood function for CES is
trivial and is similar to the one in the additive exponential smoothing
models (Hyndman et al., 2008, p. 68):

$$\mathcal{L}(\mathbf{g}, \mathbf{v}_0, \sigma^2 \mid Y) = \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^T \exp\left(-\frac{1}{2}\sum_{t=1}^{T}\left(\frac{\epsilon_t}{\sigma}\right)^2\right), \quad (8)$$

where $\mathbf{v}_0$ is the vector of initial states, $\sigma^2$ is
the variance of the error term and Y is the vector of all the
in-sample observations.
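As an illustration, the recursion in (5) can be run directly over a series. The sketch below is ours, not the authors' reference implementation: `ces_filter` is a hypothetical helper name, and the initialisation of the first fitted value and of the information component is an assumption.

```python
import numpy as np

def ces_filter(y, a0, a1, yhat0=None, ehat0=0.0):
    """Run the two real-valued CES update equations (5) over a series.

    yhat[t] = (a0*y[t-1] + (1-a0)*yhat[t-1]) - (a1*e[t-1] + (1-a1)*ehat[t-1])
    ehat[t] = (a1*y[t-1] + (1-a1)*yhat[t-1]) + (a0*e[t-1] + (1-a0)*ehat[t-1])

    where e[t] = y[t] - yhat[t] is the observed one-step-ahead error.
    Initial values are assumptions: yhat[0] = y[0], ehat[0] = 0 by default.
    """
    y = np.asarray(y, dtype=float)
    yhat = np.empty_like(y)
    ehat = np.empty_like(y)
    yhat[0] = y[0] if yhat0 is None else yhat0
    ehat[0] = ehat0
    for t in range(1, len(y)):
        e_prev = y[t - 1] - yhat[t - 1]
        yhat[t] = (a0 * y[t - 1] + (1 - a0) * yhat[t - 1]) \
            - (a1 * e_prev + (1 - a1) * ehat[t - 1])
        ehat[t] = (a1 * y[t - 1] + (1 - a1) * yhat[t - 1]) \
            + (a0 * e_prev + (1 - a0) * ehat[t - 1])
    return yhat, ehat
```

In practice the smoothing parameters would be chosen by maximising the likelihood (8) rather than fixed by hand.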
In order to understand the properties of CES, we need to study its
stationarity and stability conditions. The former holds for general
exponential smoothing in the state space form (7) when all the
eigenvalues of $\mathbf{F}$ lie inside the unit circle (Hyndman et al., 2008, p. 38).
CES can be either stationary or not, depending on the complex smoothing
parameter value, in contrast to ETS models, which are always
non-stationary. Calculating the eigenvalues of $\mathbf{F}$ for CES gives the
following roots:

$$\lambda = \frac{2 - \alpha_0 \pm \sqrt{\alpha_0^2 + 4\alpha_1 - 4}}{2}. \quad (9)$$

If the absolute values of both roots are less than one, then the estimated
CES is stationary.

When $\alpha_1 > 1$, one of the eigenvalues will always be greater than
one. In this case both eigenvalues are real numbers and CES produces
a non-stationary trajectory. When $\alpha_1 = 1$, CES becomes equivalent
to ETS(A,N,N). Finally, the model becomes stationary when:

$$\begin{cases} \alpha_1 < 5 - 2\alpha_0 \\ \alpha_1 < 1 \\ \alpha_1 > 1 - \alpha_0 \end{cases} \quad (10)$$

Note that we are not restricting CES with the conditions (10); we merely
show how the model will behave depending on the value of the complex
smoothing parameter. This property of CES means that it is able to model
either stationary or non-stationary processes, without the need to
switch between them. The property of CES for each separate time series
depends on the values of the smoothing parameters.

The other important property that arises from (7) is the stability
condition for CES. With $\epsilon_t = y_t - l_{t-1}$ the following is
obtained:

$$\begin{aligned} y_t &= l_{t-1} + \epsilon_t \\ \begin{pmatrix} l_t \\ c_t \end{pmatrix} &= \begin{pmatrix} 1-\alpha_0+\alpha_1 & -(1-\alpha_1) \\ 1-\alpha_0-\alpha_1 & 1-\alpha_0 \end{pmatrix} \begin{pmatrix} l_{t-1} \\ c_{t-1} \end{pmatrix} + \begin{pmatrix} \alpha_0-\alpha_1 \\ \alpha_0+\alpha_1 \end{pmatrix} y_t. \end{aligned} \quad (11)$$

The matrix
$\mathbf{D} = \begin{pmatrix} 1-\alpha_0+\alpha_1 & -(1-\alpha_1) \\ 1-\alpha_0-\alpha_1 & 1-\alpha_0 \end{pmatrix}$
is called the discount matrix and can be written in the general form:

$$\mathbf{D} = \mathbf{F} - \mathbf{g}\mathbf{w}'. \quad (12)$$

The model is said to be stable if all the eigenvalues of (12) lie inside
the unit circle. This condition is more important than stationarity
for the model, because it ensures that the complex weights decline over
time, so that older observations receive smaller weights than newer
ones, which is one of the main features of the conventional ETS models.
The eigenvalues are given by the following formula:

$$\lambda = \frac{2 - 2\alpha_0 + \alpha_1 \pm \sqrt{8\alpha_1 + 4\alpha_0 - 4\alpha_0\alpha_1 - 4 - 3\alpha_1^2}}{2}. \quad (13)$$

CES will be stable when the following system of inequalities is
satisfied:

$$\begin{cases} (\alpha_0 - 2.5)^2 + \alpha_1^2 > 1.25 \\ (\alpha_0 - 0.5)^2 + (\alpha_1 - 1)^2 > 0.25 \\ (\alpha_0 - 1.5)^2 + (\alpha_1 - 0.5)^2 < 1.5. \end{cases} \quad (14)$$

Both the stationarity and stability regions are shown in Figure 1.
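The two regions can also be checked numerically. The sketch below is our illustration (all function names are hypothetical): it builds F, g and w from (7) and tests the eigenvalue conditions directly, rather than through the closed-form inequalities (10) and (14).

```python
import numpy as np

def ces_matrices(a0, a1):
    """Transition matrix F, persistence vector g and measurement vector w from (7)."""
    F = np.array([[1.0, -(1 - a1)],
                  [1.0, 1 - a0]])
    g = np.array([[a0 - a1],
                  [a0 + a1]])
    w = np.array([[1.0], [0.0]])
    return F, g, w

def is_stationary(a0, a1):
    """Stationarity: all eigenvalues of F lie inside the unit circle."""
    F, _, _ = ces_matrices(a0, a1)
    return bool(np.all(np.abs(np.linalg.eigvals(F)) < 1))

def is_stable(a0, a1):
    """Stability: all eigenvalues of the discount matrix D = F - g w' lie inside the unit circle."""
    F, g, w = ces_matrices(a0, a1)
    D = F - g @ w.T
    return bool(np.all(np.abs(np.linalg.eigvals(D)) < 1))
```

For example, the pair (α0, α1) = (1.5, 0.9) satisfies all three inequalities in (10) and also lies in the stability region (14), while α1 = 1.2 violates α1 < 1 and so is non-stationary.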
The stationarity region (10) corresponds to the triangle. All the
combinations of smoothing parameters lying below the curve in the
triangle will produce the stationary harmonic trajectories, while the
rest lead to the exponential trajectories. The stability condition (14)
corresponds to the dark region. The stability region intersects the
stationarity region, but in general a stable CES can produce both
stationary and non-stationary forecasts.
The conditional mean of CES for $h$ steps ahead with known $l_t$ and
$c_t$ can be calculated using the state space model (6):

$$\mathrm{E}(y_{t+h} \mid \mathbf{v}_t) = \mathbf{w}' \mathbf{F}^{h-1} \mathbf{v}_t, \quad (15)$$

where $\mathrm{E}(y_{t+h} \mid \mathbf{v}_t) = \hat{y}_{t+h}$, while
$\mathbf{F}$ and $\mathbf{w}$ are the matrices from (7).

The forecasting trajectories of (15) will differ depending on the values
of $l_t$, $c_t$, and the complex smoothing parameter. The analysis of the
stationarity condition shows that there are several types of forecasting
trajectories of CES, depending on the particular value of the complex
smoothing parameter:
When $\alpha_1 = 1$, all forecast values are equal to the last obtained
forecast, which corresponds to a flat line. This trajectory is shown in
Figure 2A.

When $\alpha_1 > 1$, the model produces a trajectory with exponential
growth, which is shown in Figure 2B.

When $\frac{4 - \alpha_0^2}{4} < \alpha_1 < 1$, the trajectory becomes
stationary and CES produces an exponential decline, shown in Figure 2C.

When $1 - \alpha_0 < \alpha_1 < \frac{4 - \alpha_0^2}{4}$, the trajectory
becomes harmonic and converges to zero (Figure 2D).

Finally, when $0 < \alpha_1 < 1 - \alpha_0$, a diverging harmonic
trajectory is produced and the model becomes non-stationary. This
trajectory is of no use in forecasting, which is why we do not show it
in the figures.
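These trajectories can be reproduced from the conditional mean (15). The following sketch is ours (the function name is hypothetical); it iterates w'F^{h-1}v_t for a given state and complex smoothing parameter.

```python
import numpy as np

def ces_forecast(l, c, a0, a1, h):
    """Conditional mean E[y_{t+h} | v_t] = w' F^{h-1} v_t from (15),
    evaluated for h = 1, ..., h given the last state v_t = (l, c)'."""
    F = np.array([[1.0, -(1 - a1)],
                  [1.0, 1 - a0]])
    w = np.array([1.0, 0.0])
    v = np.array([l, c], dtype=float)
    out = []
    for _ in range(h):
        out.append(w @ v)  # h-step-ahead point forecast
        v = F @ v          # propagate the state one step
    return np.array(out)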
Using (7) the conditional variance of CES for h steps ahead with known
lt and ct can be calculated similarly to the pure additive ETS
models (Hyndman et al., 2008, p. 96).
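For pure additive SSOE models that variance takes the form Var(y_{t+h} | v_t) = sigma^2 (1 + sum_{j=1}^{h-1} (w' F^{j-1} g)^2) (Hyndman et al., 2008); the sketch below (ours, with a hypothetical function name) applies that result with the CES matrices from (7).

```python
import numpy as np

def ces_variance(a0, a1, sigma2, h):
    """h-step conditional variance using the pure additive SSOE result:
    Var(y_{t+h} | v_t) = sigma2 * (1 + sum_{j=1}^{h-1} (w' F^{j-1} g)^2)."""
    F = np.array([[1.0, -(1 - a1)],
                  [1.0, 1 - a0]])
    g = np.array([a0 - a1, a0 + a1])
    w = np.array([1.0, 0.0])
    total, Fj = 1.0, np.eye(2)  # Fj accumulates F^{j-1}
    for _ in range(1, h):
        total += float(w @ Fj @ g) ** 2
        Fj = F @ Fj
    return sigma2 * total
```

For h = 1 this reduces to the error variance itself.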
How to decompose a time series and why?

In time series analysis, forecasting new values requires understanding
past data; more formally, it is very important to know the patterns that
the values follow over time. Many factors can push forecast values in
the wrong direction. Fundamentally, a time series consists of four
components, and variation in those components causes changes in the
pattern of the time series. These components are:
Level: the average baseline value of the series over time.
Trend: the component that causes increasing or
decreasing patterns in a time series.
Seasonality: a cyclical pattern that repeats over a fixed
period, causing short-term increasing or
decreasing patterns in a time series.
Residual/Noise: These are the random variations in the time
series.
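As a toy illustration of these components (our example, with arbitrary numbers), a synthetic series can be assembled from a level, a trend, a 12-period seasonality and noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 48
t = np.arange(n)

level = 100.0                                  # base value
trend = 0.5 * t                                # steady increase
seasonality = 10 * np.sin(2 * np.pi * t / 12)  # 12-period cycle
noise = rng.normal(0, 1, n)                    # random variation

# Additive combination: components are summed
y_additive = level + trend + seasonality + noise

# Multiplicative combination: components scale each other
y_multiplicative = (level * (1 + 0.005 * t)
                    * (1 + 0.1 * np.sin(2 * np.pi * t / 12))
                    * (1 + 0.01 * noise))
```

In the additive series the seasonal swings keep the same size; in the multiplicative one they grow with the level.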
Combining these components over time forms the time series. Most time
series contain a level and noise/residual, while trend and seasonality
are optional components.

If seasonality and trend are part of the time series, they will affect
the forecast values, since the pattern of the forecast may differ from
the history it extends.

The components of a time series can be combined in two ways:

* Additive
* Multiplicative
If the components are added together to make the time series, it is
called an additive time series. Visually, a time series is additive if
its increasing or decreasing pattern has a similar magnitude throughout
the series. An additive time series can be represented as:

y(t) = Level + Trend + Seasonality + Noise
If the components are multiplied together, the time series is called a
multiplicative time series. Visually, if the time series exhibits
exponential growth or decline over time, it can be considered
multiplicative. A multiplicative time series can be represented as:

y(t) = Level * Trend * Seasonality * Noise
```python
from statsmodels.tsa.seasonal import seasonal_decompose

a = seasonal_decompose(df["y"], model="add", period=1)
a.plot();
```
Import and instantiate the models. Setting the season_length argument is
sometimes tricky. This article on seasonal periods by the master, Rob
Hyndman, can be useful.

Note: AutoCES automatically selects the best Complex Exponential
Smoothing model using an information criterion. The default is the
corrected Akaike Information Criterion (AICc), while particular models
are estimated using maximum likelihood. The state-space equations can be
determined based on their components: S simple, P partial, Z optimized
or N omitted. The model string parameter defines the kind of CES model:
N for simple CES (without seasonality), S for simple seasonality (lagged
CES), P for partial seasonality (without the complex part), F for full
seasonality (lagged CES with real and complex seasonal parts).

If a component is selected as Z, it operates as a placeholder asking the
AutoCES model to figure out the best setting.
```python
season_length = 1    # yearly data
horizon = len(test)  # number of predictions

# We call the model that we are going to use
models = [AutoCES(season_length=season_length)]
```
We fit the models by instantiating a new StatsForecast object with the
following parameters:

models: a list of models. Select the models you want from models and
import them.
Let us now visualize the residuals of our models.

As we can see, the result obtained above is a dictionary. To extract
each element from the dictionary, we use the .get() method and then
save it in a pd.DataFrame().
If you want to gain speed in production settings where you have multiple
series or models, we recommend using the StatsForecast.forecast method
instead of .fit and .predict.

The main difference is that .forecast does not store the fitted values
and is highly scalable in distributed environments.

The forecast method takes two arguments: the next h (horizon) steps to
forecast and level.
h (int): represents the forecast h steps into the future. In this
case, 12 months ahead.
level (list of floats): this optional parameter is used for
probabilistic forecasting. Set the level (or confidence percentile)
of your prediction interval. For example, level=[90] means that
the model expects the real value to be inside that interval 90% of
the time.
The forecast object here is a new data frame that includes a column with
the name of the model and the y hat values, as well as columns for the
uncertainty intervals. Depending on your computer, this step should take
around 1 minute. (If you want to speed things up to a couple of seconds,
remove AutoModels like ARIMA and Theta.)
To generate forecasts, use the predict method.

The predict method takes two arguments: the next h (horizon) steps to
forecast and level.
h (int): represents the forecast h steps into the future. In this
case, 12 months ahead.
level (list of floats): this optional parameter is used for
probabilistic forecasting. Set the level (or confidence percentile)
of your prediction interval. For example, level=[95] means that
the model expects the real value to be inside that interval 95% of
the time.
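Under a Gaussian error assumption, such an interval can be recovered from a point forecast and the h-step standard deviation. This is our sketch of the standard formula, not necessarily how the library computes its intervals:

```python
from statistics import NormalDist

def prediction_interval(point, sigma_h, level=95):
    """Gaussian prediction interval: point +/- z * sigma_h,
    where z is the two-sided normal quantile for the given level."""
    z = NormalDist().inv_cdf(0.5 + level / 200)
    return point - z * sigma_h, point + z * sigma_h
```

For level=95, z is roughly 1.96, the familiar two-sided normal quantile.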
The forecast object here is a new data frame that includes a column with
the name of the model and the y hat values, as well as columns for the
uncertainty intervals. This step should take less than 1 second.
Now let's visualize the result of our forecast together with the
historical data of our time series, and draw the 95% prediction interval
obtained when making the prediction.
In previous steps, we used our historical data to predict the future.
However, to assess its accuracy, we would also like to know how the
model would have performed in the past. To assess the accuracy and
robustness of your models on your data, perform cross-validation.

With time series data, cross-validation is done by defining a sliding
window across the historical data and predicting the period following
it. This form of cross-validation allows us to arrive at a better
estimation of our model's predictive abilities across a wider range of
temporal instances, while also keeping the data in the training set
contiguous, as required by our models.

The following graph depicts such a cross-validation strategy:
Cross-validation of time series models is considered a best practice but
most implementations are very slow. The statsforecast library implements
cross-validation as a distributed operation, making the process less
time-consuming to perform. If you have big datasets you can also perform
cross-validation in a distributed cluster using Ray, Dask or Spark.

In this case, we want to evaluate the performance of each model over the
last 5 evaluation windows (n_windows=5), spaced 12 months apart
(step_size=12). Depending on your computer, this step should take
around 1 minute.

The cross_validation method from the StatsForecast class takes the
following arguments.
df: training data frame
h (int): represents h steps into the future that are being
forecasted. In this case, 12 months ahead.
step_size (int): step size between each window. In other words:
how often do you want to run the forecasting processes.
n_windows(int): number of windows used for cross validation. In
other words: what number of forecasting processes in the past do you
want to evaluate.
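The windowing logic behind these arguments can be sketched in plain Python. This is our illustration of the cutoff arithmetic, not the library's implementation:

```python
def cv_cutoffs(n_obs, h, step_size, n_windows):
    """Training cutoffs for a sliding-origin evaluation: each window trains
    on y[:cutoff] and evaluates forecasts on y[cutoff:cutoff + h]; the last
    window's test set ends at the end of the sample."""
    last_cutoff = n_obs - h
    return [last_cutoff - (n_windows - 1 - i) * step_size
            for i in range(n_windows)]
```

For example, with 120 observations, h=12, step_size=12 and n_windows=5, the training cutoffs fall at observations 60, 72, 84, 96 and 108.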
Now we are going to evaluate our model with the results of the
predictions, using several accuracy metrics: MAE, MAPE, MASE, RMSE and
SMAPE.
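For reference, the common definitions of these metrics can be written directly in numpy. This is our sketch of the textbook formulas; in practice the utilsforecast loss functions imported below are the recommended route:

```python
import numpy as np

def mae(y, yhat):
    """Mean absolute error."""
    return np.mean(np.abs(y - yhat))

def mape(y, yhat):
    """Mean absolute percentage error, in percent."""
    return np.mean(np.abs((y - yhat) / y)) * 100

def rmse(y, yhat):
    """Root mean squared error."""
    return np.sqrt(np.mean((y - yhat) ** 2))

def smape(y, yhat):
    """Symmetric MAPE, in percent."""
    return np.mean(2 * np.abs(y - yhat) / (np.abs(y) + np.abs(yhat))) * 100

def mase(y, yhat, y_train, season_length=1):
    """MAE scaled by the in-sample seasonal naive MAE."""
    scale = np.mean(np.abs(y_train[season_length:] - y_train[:-season_length]))
    return np.mean(np.abs(y - yhat)) / scale
```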
```python
from functools import partial

import utilsforecast.losses as ufl
from utilsforecast.evaluation import evaluate
```