Introduction

The Theta method (Assimakopoulos & Nikolopoulos, 2000, hereafter A&N) is applied to non-seasonal or deseasonalised time series, where the deseasonalisation is usually performed via the multiplicative classical decomposition. The method decomposes the original time series into two new lines through the so-called theta coefficients, denoted by $\theta_1$ and $\theta_2$ for $\theta_1, \theta_2 \in \mathbb{R}$, which are applied to the second differences of the data. The second differences are reduced when $\theta < 1$, resulting in a better approximation of the long-term behaviour of the series (Assimakopoulos, 1995). If $\theta$ is equal to zero, the new line is a straight line. When $\theta > 1$ the local curvatures are increased, magnifying the short-term movements of the time series (A&N). The new lines produced are called theta lines, denoted here by $Z(\theta_1)$ and $Z(\theta_2)$. These lines have the same mean value and slope as the original data, but the local curvatures are either filtered out or enhanced, depending on the value of the $\theta$ coefficient.

In other words, the decomposition process has the advantage of exploiting information in the data that usually cannot be captured and modelled completely through the extrapolation of the original time series. The theta lines can be regarded as new time series and are extrapolated separately using an appropriate forecasting method. Once the extrapolation of each theta line has been completed, recomposition takes place through a combination scheme in order to calculate the point forecasts of the original time series. Combining has long been considered a useful practice in the forecasting literature (for example, Clemen, 1989; Makridakis and Winkler, 1983; Petropoulos et al., 2014), and therefore its application to the Theta method is expected to result in more accurate and robust forecasts.

The Theta method is quite versatile in terms of choosing the number of theta lines, the theta coefficients and the extrapolation methods, and combining these to obtain robust forecasts. However, A&N proposed a simplified version involving the use of only two theta lines with prefixed $\theta$ coefficients, which are extrapolated over time using a linear regression (LR) model for the theta line with $\theta_1 = 0$ and simple exponential smoothing (SES) for the theta line with $\theta_2 = 2$. The final forecasts are produced by combining the forecasts of the two theta lines with equal weights.

The performance of the Theta method has also been confirmed by other empirical studies (for example Nikolopoulos et al., 2012, Petropoulos and Nikolopoulos, 2013). Moreover, Hyndman and Billah (2003), hereafter H&B, showed that the simple exponential smoothing with drift model (SES-d) is a statistical model for the simplified version of the Theta method. More recently, Thomakos and Nikolopoulos (2014) provided additional theoretical insights, while Thomakos and Nikolopoulos (2015) derived new theoretical formulations for the application of the method to multivariate time series, and investigated the conditions under which the bivariate Theta method is expected to forecast better than the univariate one. Despite these advances, we believe that the Theta method deserves more attention from the forecasting community, given its simplicity and superior forecasting performance.

One key aspect of the Theta method is that, by definition, it is dynamic. One can choose different theta lines and combine the produced forecasts using either equal or unequal weights. However, A&N limit this important property by fixing the theta coefficients to predefined values.
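Concretely, with two theta lines the recomposition step is a weighted average of the two extrapolations,

$$\hat{Y}_{T+h} = w \, \hat{Z}_{T+h}(\theta_1) + (1 - w) \, \hat{Z}_{T+h}(\theta_2), \qquad 0 \le w \le 1,$$

where $\hat{Z}_{T+h}(\theta)$ denotes the $h$-step-ahead extrapolation of $Z(\theta)$; the standard method fixes $w = 0.5$.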

Standard Theta Model

For the standard Theta model, Assimakopoulos and Nikolopoulos proposed the theta line $\zeta_t(\theta)$ as the solution of the equation

\begin{equation} D^2 \zeta_t(\theta) = \theta D^2 Y_t, \quad t = 1, \cdots, T \tag{1} \end{equation}

where $Y_1, \cdots, Y_T$ denote the original time series and $DX_t = X_t - X_{t-1}$ is the difference operator. The initial values $\zeta_1$ and $\zeta_2$ are obtained by minimising $\sum_{t=1}^{T} [Y_t - \zeta_t(\theta)]^2$. The analytical solution of (1) is given by

\begin{equation} \zeta_t(\theta) = \theta Y_t + (1 - \theta)(A_T + B_T t), \quad t = 1, \cdots, T, \tag{2} \end{equation}

where $A_T$ and $B_T$ are the least squares coefficients of a simple linear regression of $Y_1, \cdots, Y_T$ against $1, \cdots, T$. They depend only on the original data and are given as follows:

\begin{equation} A_T = \frac{1}{T} \sum_{t=1}^{T} Y_t - \frac{T+1}{2} B_T \tag{3} \end{equation}
\begin{equation} B_T = \frac{6}{T^2 - 1} \left( \frac{2}{T} \sum_{t=1}^{T} t Y_t - \frac{T+1}{T} \sum_{t=1}^{T} Y_t \right) \tag{4} \end{equation}

From this perspective, theta lines can be understood as functions of the linear regression model fitted directly to the data. Indeed, the Theta method's projections for $h$ steps ahead are an ad hoc combination (50%-50%) of the linear extrapolations of $\zeta(0)$ and $\zeta(2)$.
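As a quick illustration, here is a minimal NumPy sketch of equations (2)-(4); the helper theta_line is ours, not part of any library:

import numpy as np

def theta_line(y, theta):
    """Theta line of equation (2), with A_T and B_T from equations (3) and (4)."""
    T = len(y)
    t = np.arange(1, T + 1)
    B = 6 / (T**2 - 1) * (2 / T * np.sum(t * y) - (T + 1) / T * np.sum(y))  # slope, eq. (4)
    A = np.mean(y) - (T + 1) / 2 * B                                        # intercept, eq. (3)
    return theta * y + (1 - theta) * (A + B * t)                            # eq. (2)

y = np.array([589.0, 561.0, 640.0, 656.0, 727.0])  # first values of the milk series used below
z0 = theta_line(y, 0.0)  # the linear regression line
z2 = theta_line(y, 2.0)  # doubled local curvatures
print(np.allclose(0.5 * z0 + 0.5 * z2, y))  # True: the equal-weight recomposition recovers the data

Note that averaging $\zeta(0)$ and $\zeta(2)$ with equal weights returns the original series exactly; it is their separate extrapolations that produce the forecasts.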

  • When a theta coefficient $\theta < 1$ is applied to the second differences of the data, the second differences are reduced, resulting in a better approximation of the long-term behaviour of the series.

  • If $\theta = 0$, the theta line reduces to a straight line (the linear regression line).

  • If $\theta > 1$, the local curvatures are increased, magnifying the short-term movements of the series.

We will refer to the above setup as the standard Theta method. The steps for building it are as follows (a code sketch implementing the steps appears after the list):

  1. Deseasonalisation: Firstly, the time series is tested for statistically significant seasonal behaviour. A time series is seasonal if

$$|\rho_m| > q_{1-\frac{\alpha}{2}} \sqrt{\frac{1 + 2 \sum_{i=1}^{m-1} \rho_i^2}{T}}$$

where $\rho_k$ denotes the lag-$k$ autocorrelation, $m$ is the number of periods within a seasonal cycle (for example, 12 for monthly data), $T$ is the sample size, $q$ is the quantile function of the standard normal distribution, and $(1-\alpha)\%$ is the confidence level. A&N opted for a 90% confidence level. If the time series is identified as seasonal, then it is deseasonalised via the classical decomposition method, assuming a multiplicative seasonal component.

  2. Decomposition: The second step consists of decomposing the seasonally adjusted time series into two theta lines, the linear regression line $\zeta(0)$ and the theta line $\zeta(2)$.

  3. Extrapolation: $\zeta(2)$ is extrapolated using simple exponential smoothing (SES), while $\zeta(0)$ is extrapolated as a normal linear regression line.

  4. Combination: The final forecast is a combination of the forecasts of the two $\theta$ lines using equal weights.

  5. Reseasonalisation: If seasonality was detected in the first step, the final forecasts are multiplied by the respective seasonal indices.
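To make the five steps concrete, here is a compact sketch of the standard Theta method using NumPy and statsmodels. It illustrates the recipe above rather than reproducing any library's exact implementation; the function standard_theta_forecast and its internal choices (for example, leaving SES initialisation to statsmodels defaults) are our own simplifications.

import numpy as np
from scipy import stats
from statsmodels.tsa.stattools import acf
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

def standard_theta_forecast(y, m, h):
    """Sketch of the standard Theta method for a 1-d array y with season length m."""
    T = len(y)
    # Step 1: deseasonalisation, preceded by the 90%-level seasonality test
    rho = acf(y, nlags=m)
    limit = stats.norm.ppf(0.95) * np.sqrt((1 + 2 * np.sum(rho[1:m] ** 2)) / T)
    seasonal = m > 1 and abs(rho[m]) > limit
    if seasonal:
        decomp = seasonal_decompose(y, model="multiplicative", period=m)
        seas = np.resize(decomp.seasonal[:m], T + h)  # tile the seasonal indices over T + h points
        y = y / seas[:T]
    # Step 2: decomposition into zeta(0) (the regression line) and zeta(2)
    t = np.arange(1, T + 1)
    B = 6 / (T**2 - 1) * (2 / T * np.sum(t * y) - (T + 1) / T * np.sum(y))
    A = np.mean(y) - (T + 1) / 2 * B
    z2 = 2 * y - (A + B * t)  # equation (2) with theta = 2
    # Step 3: extrapolation, linear regression for zeta(0) and SES for zeta(2)
    z0_fc = A + B * np.arange(T + 1, T + h + 1)
    z2_fc = SimpleExpSmoothing(z2).fit().forecast(h)
    # Step 4: combination with equal weights
    fc = 0.5 * (z0_fc + z2_fc)
    # Step 5: reseasonalisation
    return fc * seas[T:T + h] if seasonal else fc

For the monthly milk series loaded below, standard_theta_forecast(train["y"].to_numpy(), m=12, h=12) would produce twelve monthly forecasts.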

Loading libraries and data

Tip

Statsforecast will be needed. To install, see instructions.

Next, we import plotting libraries and configure the plotting style.

import matplotlib.pyplot as plt
import seaborn as sns
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
plt.style.use('grayscale') # fivethirtyeight  grayscale  classic
plt.rcParams['lines.linewidth'] = 1.5
dark_style = {
    'figure.facecolor': '#008080',  # #212946
    'axes.facecolor': '#008080',
    'savefig.facecolor': '#008080',
    'axes.grid': True,
    'axes.grid.which': 'both',
    'axes.spines.left': False,
    'axes.spines.right': False,
    'axes.spines.top': False,
    'axes.spines.bottom': False,
    'grid.color': '#000000',  #2A3459
    'grid.linewidth': '1',
    'text.color': '0.9',
    'axes.labelcolor': '0.9',
    'xtick.color': '0.9',
    'ytick.color': '0.9',
    'font.size': 12 }
plt.rcParams.update(dark_style)


from pylab import rcParams
rcParams['figure.figsize'] = (18,7)

Read Data

import pandas as pd
df = pd.read_csv("https://raw.githubusercontent.com/Naren8520/Serie-de-tiempo-con-Machine-Learning/main/Data/milk_production.csv", usecols=[1,2])
df.head()
        month  production
0  1962-01-01         589
1  1962-02-01         561
2  1962-03-01         640
3  1962-04-01         656
4  1962-05-01         727

The input to StatsForecast is always a data frame in long format with three columns: unique_id, ds and y:

  • The unique_id (string, int or category) represents an identifier for the series.

  • The ds (datestamp) column should be of a format expected by Pandas, ideally YYYY-MM-DD for a date or YYYY-MM-DD HH:MM:SS for a timestamp.

  • The y (numeric) represents the measurement we wish to forecast.

df["unique_id"]="1"
df.columns=["ds", "y", "unique_id"]
df.head()
           ds    y unique_id
0  1962-01-01  589         1
1  1962-02-01  561         1
2  1962-03-01  640         1
3  1962-04-01  656         1
4  1962-05-01  727         1
print(df.dtypes)
ds           object
y             int64
unique_id    object
dtype: object

We can see that our time variable ds is stored as an object; we need to convert it to datetime format.

df["ds"] = pd.to_datetime(df["ds"])

Explore Data with the plot method

Plot some series using the plot method from the StatsForecast class. This method plots a random series from the dataset and is useful for basic EDA.

from statsforecast import StatsForecast

StatsForecast.plot(df, engine="matplotlib")

Autocorrelation plots

fig, axs = plt.subplots(nrows=1, ncols=2)

plot_acf(df["y"],  lags=30, ax=axs[0],color="fuchsia")
axs[0].set_title("Autocorrelation");

plot_pacf(df["y"],  lags=30, ax=axs[1],color="lime")
axs[1].set_title('Partial Autocorrelation')

plt.show();

Decomposition of the time series

How to decompose a time series and why?

In time series analysis, in order to forecast new values it is very important to know past data. More formally, it is very important to know the patterns that values follow over time. There can be many reasons that cause our forecast values to fall in the wrong direction. Basically, a time series consists of four components, and the variation of those components causes changes in the pattern of the time series. These components are:

  • Level: This is the primary value that averages over time.
  • Trend: The trend is the value that causes increasing or decreasing patterns in a time series.
  • Seasonality: A pattern that repeats over a fixed period (such as a year) and causes short-term increasing or decreasing patterns in a time series.
  • Residual/Noise: These are the random variations in the time series.

Combining these components over time leads to the formation of a time series. Most time series consist of a level and noise/residuals; trend and seasonality are optional components.

If seasonality and trend are part of the time series, they will affect the forecast values, as the pattern of the forecasted time series may differ from that of the previous time series.

The combination of the components in a time series can be of two types:

  • Additive
  • Multiplicative

Additive time series

If the components of a time series are added together to form the series, it is called an additive time series. Visually, we can say that a time series is additive if its increasing or decreasing pattern has a similar magnitude throughout the series. The mathematical form of an additive time series is: $y(t) = \text{Level} + \text{Trend} + \text{Seasonality} + \text{Noise}$

Multiplicative time series

If the components of a time series are multiplied together to form the series, it is called a multiplicative time series. Visually, if the time series shows exponential growth or decline over time, it can be considered a multiplicative time series. The mathematical form of a multiplicative time series is:

$y(t) = \text{Level} \times \text{Trend} \times \text{Seasonality} \times \text{Noise}$

Additive

from statsmodels.tsa.seasonal import seasonal_decompose 
a = seasonal_decompose(df["y"], model = "additive", period=12)
a.plot();

Multiplicative

from statsmodels.tsa.seasonal import seasonal_decompose 
a = seasonal_decompose(df["y"], model = "multiplicative", period=12)  # lowercase model name, as expected by statsmodels
a.plot();

Split the data into training and testing

Let’s divide our data into two sets:

  1. Data to train our Theta model.
  2. Data to test our model.

For the test data we will use the last 12 months to test and evaluate the performance of our model.

train = df[df.ds<='1974-12-01'] 
test = df[df.ds>'1974-12-01']
train.shape, test.shape
((156, 3), (12, 3))

Now let’s plot the training data and the test data.

sns.lineplot(train,x="ds", y="y", label="Train", linestyle="--")
sns.lineplot(test, x="ds", y="y", label="Test")
plt.title("Monthly Milk Production");
plt.show()

Implementation of the standard Theta model with StatsForecast

The parameters of the Theta model are listed below. For more information, visit the documentation.

season_length : int
    Number of observations per unit of time. Ex: 24 Hourly data.
decomposition_type : str
    Seasonal decomposition type, 'multiplicative' (default) or 'additive'.
alias : str
    Custom name of the model.
prediction_intervals : Optional[ConformalIntervals]
    Information to compute conformal prediction intervals.
    By default, the model will compute the native prediction
    intervals.
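For example, to request conformal rather than native prediction intervals, a ConformalIntervals object can be passed in. A short sketch (the calibration settings of 2 windows of 12 steps each are our own illustrative choice):

from statsforecast.models import Theta
from statsforecast.utils import ConformalIntervals

# Default behaviour: native prediction intervals
theta = Theta(season_length=12, decomposition_type="multiplicative")

# Conformal prediction intervals, calibrated on 2 windows of 12 steps each
theta_conformal = Theta(
    season_length=12,
    decomposition_type="multiplicative",
    prediction_intervals=ConformalIntervals(h=12, n_windows=2),
    alias="ThetaConformal",
)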

Load libraries

from statsforecast import StatsForecast
from statsforecast.models import Theta

Instantiating Model

Import and instantiate the models. Setting the arguments is sometimes tricky. This article on Seasonal periods by the master, Rob Hyndman, can be useful for season_length.

season_length = 12 # Monthly data 
horizon = len(test) # number of predictions

models = [Theta(season_length=season_length, 
                decomposition_type="additive")] # multiplicative   additive

We fit the models by instantiating a new StatsForecast object with the following parameters:

  • models: a list of models. Select the models you want from models and import them.

  • freq: a string indicating the frequency of the data. (See panda’s available frequencies.)

  • n_jobs: int, number of jobs used in the parallel processing; use -1 for all cores.

  • fallback_model: a model to be used if a model fails.

Any settings are passed into the constructor. Then you call its fit method and pass in the historical data frame.

sf = StatsForecast(df=train,
                   models=models,
                   freq='MS', 
                   n_jobs=-1)

Fit Model

sf.fit()
StatsForecast(models=[Theta])

Let’s see the results of our Theta model. We can observe it with the following instruction:

result=sf.fitted_[0,0].model_
print(result.keys())
print(result['fit'])
dict_keys(['mse', 'amse', 'fit', 'residuals', 'm', 'states', 'par', 'n', 'modeltype', 'mean_y', 'decompose', 'decomposition_type', 'seas_forecast', 'fitted'])
results(x=array([225.82002697,   0.76015625]), fn=10.638733596938769, nit=19, simplex=array([[241.83142594,   0.76274414],
       [225.82002697,   0.76015625],
       [212.41789302,   0.76391602]]))

Let us now visualize the residuals of our models.

As we can see, the result obtained above is a dictionary. To extract each element from the dictionary, we use the .get() method and then save the result in a pd.DataFrame().

residual=pd.DataFrame(result.get("residuals"), columns=["residual Model"])
residual
     residual Model
0        -17.596375
1        -46.997192
2         23.093933
...             ...
153      -59.003235
154      -91.150085
155      -42.749451
import scipy.stats as stats

fig, axs = plt.subplots(nrows=2, ncols=2)

residual.plot(ax=axs[0,0])
axs[0,0].set_title("Residuals");

sns.histplot(residual["residual Model"], kde=True, ax=axs[0,1])  # distplot is deprecated in recent seaborn
axs[0,1].set_title("Density plot - Residual");

stats.probplot(residual["residual Model"], dist="norm", plot=axs[1,0])
axs[1,0].set_title('Plot Q-Q')

plot_acf(residual["residual Model"],  lags=35, ax=axs[1,1],color="fuchsia")
axs[1,1].set_title("Autocorrelation");

plt.show();

Forecast Method

If you want to gain speed in production settings where you have multiple series or models, we recommend using the StatsForecast.forecast method instead of .fit and .predict.

The main difference is that .forecast does not store the fitted values and is highly scalable in distributed environments.

The forecast method takes two arguments: the number of steps ahead h (the horizon) and, optionally, level.

  • h (int): represents the forecast h steps into the future. In this case, 12 months ahead.

  • level (list of floats): this optional parameter is used for probabilistic forecasting. Set the level (or confidence percentile) of your prediction interval. For example, level=[90] means that the model expects the real value to be inside that interval 90% of the times.

The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals. Depending on your computer, this step should take around 1min.

# Prediction
Y_hat = sf.forecast(horizon, fitted=True)
Y_hat
                   ds       Theta
unique_id
1          1975-01-01  838.559814
1          1975-02-01  800.188232
1          1975-03-01  893.472900
...               ...         ...
1          1975-10-01  816.166931
1          1975-11-01  786.962036
1          1975-12-01  823.826538
values=sf.forecast_fitted_values()
values.head()
                   ds      y       Theta
unique_id
1          1962-01-01  589.0  606.596375
1          1962-02-01  561.0  607.997192
1          1962-03-01  640.0  616.906067
1          1962-04-01  656.0  608.873047
1          1962-05-01  727.0  607.395142
StatsForecast.plot(values)

Adding 95% confidence interval with the forecast method

sf.forecast(h=horizon, level=[95])
                   ds       Theta  Theta-lo-95  Theta-hi-95
unique_id
1          1975-01-01  838.559814   741.324280   954.365540
1          1975-02-01  800.188232   640.785645   944.996887
1          1975-03-01  893.472900   705.123901  1064.757324
...               ...         ...          ...          ...
1          1975-10-01  816.166931   539.706848  1083.791870
1          1975-11-01  786.962036   487.946075  1032.028931
1          1975-12-01  823.826538   512.674866  1101.965942
Y_hat=Y_hat.reset_index()
Y_hat
    unique_id          ds       Theta
0           1  1975-01-01  838.559814
1           1  1975-02-01  800.188232
2           1  1975-03-01  893.472900
...       ...         ...         ...
9           1  1975-10-01  816.166931
10          1  1975-11-01  786.962036
11          1  1975-12-01  823.826538
# Merge the forecasts with the true values
test['unique_id'] = test['unique_id'].astype(int)
Y_hat1 = test.merge(Y_hat, how='left', on=['unique_id', 'ds'])
Y_hat1
            ds    y  unique_id       Theta
0   1975-01-01  834          1  838.559814
1   1975-02-01  782          1  800.188232
2   1975-03-01  892          1  893.472900
...        ...  ...        ...         ...
9   1975-10-01  827          1  816.166931
10  1975-11-01  797          1  786.962036
11  1975-12-01  843          1  823.826538
fig, ax = plt.subplots(1, 1)
plot_df = pd.concat([train, Y_hat1]).set_index('ds')
plot_df[['y', "Theta"]].plot(ax=ax, linewidth=2)
ax.set_title('Forecast', fontsize=22)
ax.set_ylabel('Monthly Milk Production', fontsize=20)
ax.set_xlabel('Month', fontsize=20)
ax.legend(prop={'size': 15})
ax.grid(True)

Predict method with confidence interval

To generate forecasts use the predict method.

The predict method takes two arguments: the number of steps ahead h (the horizon) and, optionally, level.

  • h (int): represents the forecast h steps into the future. In this case, 12 months ahead.

  • level (list of floats): this optional parameter is used for probabilistic forecasting. Set the level (or confidence percentile) of your prediction interval. For example, level=[95] means that the model expects the real value to be inside that interval 95% of the times.

The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals.

This step should take less than 1 second.

sf.predict(h=horizon)
                   ds       Theta
unique_id
1          1975-01-01  838.559814
1          1975-02-01  800.188232
1          1975-03-01  893.472900
...               ...         ...
1          1975-10-01  816.166931
1          1975-11-01  786.962036
1          1975-12-01  823.826538
forecast_df = sf.predict(h=horizon, level=[80,95]) 

forecast_df
                   ds       Theta  Theta-lo-80  Theta-hi-80  Theta-lo-95  Theta-hi-95
unique_id
1          1975-01-01  838.559814   765.496155   927.260071   741.324280   954.365540
1          1975-02-01  800.188232   701.729797   898.807434   640.785645   944.996887
1          1975-03-01  893.472900   758.481018  1006.847656   705.123901  1064.757324
...               ...         ...          ...          ...          ...          ...
1          1975-10-01  816.166931   611.404541   991.667419   539.706848  1083.791870
1          1975-11-01  786.962036   561.990845   969.637634   487.946075  1032.028931
1          1975-12-01  823.826538   591.283691  1029.491577   512.674866  1101.965942

We can join the forecast result with the historical data using the pandas function pd.concat(), and then be able to use this result for graphing.

pd.concat([df, forecast_df]).set_index('ds')
                y unique_id       Theta  Theta-lo-80  Theta-hi-80  Theta-lo-95  Theta-hi-95
ds
1962-01-01  589.0         1         NaN          NaN          NaN          NaN          NaN
1962-02-01  561.0         1         NaN          NaN          NaN          NaN          NaN
1962-03-01  640.0         1         NaN          NaN          NaN          NaN          NaN
...           ...       ...         ...          ...          ...          ...          ...
1975-10-01    NaN       NaN  816.166931   611.404541   991.667419   539.706848  1083.791870
1975-11-01    NaN       NaN  786.962036   561.990845   969.637634   487.946075  1032.028931
1975-12-01    NaN       NaN  823.826538   591.283691  1029.491577   512.674866  1101.965942

Now let’s visualize the result of our forecast and the historical data of our time series, also let’s draw the confidence interval that we have obtained when making the prediction with 95% confidence.

def plot_forecasts(y_hist, y_true, y_pred, models):
    _, ax = plt.subplots(1, 1, figsize = (20, 7))
    y_true = y_true.merge(y_pred, how='left', on=['unique_id', 'ds'])
    df_plot = pd.concat([y_hist, y_true]).set_index('ds').tail(12*10)
    df_plot[['y'] + models].plot(ax=ax, linewidth=2 )
    colors = ['green']
    ax.fill_between(df_plot.index, 
                df_plot['Theta-lo-80'], 
                df_plot['Theta-hi-80'],
                alpha=.20,
                color='lime',
                label='Theta_level_80')
    ax.fill_between(df_plot.index, 
                df_plot['Theta-lo-95'], 
                df_plot['Theta-hi-95'],
                alpha=.2,
                color='white',
                label='Theta_level_95')
    ax.set_title('Forecast with Prediction Intervals', fontsize=22)
    ax.set_ylabel('Monthly Milk Production', fontsize=20)
    ax.set_xlabel('Month', fontsize=20)
    ax.legend(prop={'size': 15})
    ax.grid(True)
    plt.show()
plot_forecasts(train, test, forecast_df, models=['Theta'])

Let’s plot the same graph using the plot function that comes with StatsForecast, as shown below.

sf.plot(df, forecast_df, level=[95])

Cross-validation

In previous steps, we’ve taken our historical data to predict the future. However, to assess its accuracy, we would also like to know how the model would have performed in the past. To assess the accuracy and robustness of your model on your data, perform cross-validation.

With time series data, Cross Validation is done by defining a sliding window across the historical data and predicting the period following it. This form of cross-validation allows us to arrive at a better estimation of our model’s predictive abilities across a wider range of temporal instances while also keeping the data in the training set contiguous as is required by our models.

The following graph depicts such a Cross Validation Strategy:

Perform time series cross-validation

Cross-validation of time series models is considered a best practice but most implementations are very slow. The statsforecast library implements cross-validation as a distributed operation, making the process less time-consuming to perform. If you have big datasets you can also perform Cross Validation in a distributed cluster using Ray, Dask or Spark.

In this case, we want to evaluate the performance of the model over the last 3 windows of 12 months each (n_windows=3), moving the evaluation window forward 12 months at a time (step_size=12). Depending on your computer, this step should take around 1 minute.

The cross_validation method from the StatsForecast class takes the following arguments.

  • df: training data frame

  • h (int): represents h steps into the future that are being forecasted. In this case, 12 months ahead.

  • step_size (int): step size between each window. In other words: how often do you want to run the forecasting processes.

  • n_windows(int): number of windows used for cross validation. In other words: what number of forecasting processes in the past do you want to evaluate.

crossvalidation_df = sf.cross_validation(df=train,
                                         h=horizon,
                                         step_size=12,
                                         n_windows=3)

The crossvalidation_df object is a new data frame that includes the following columns:

  • unique_id: index. If you don’t like working with an index, just run crossvalidation_df.reset_index().
  • ds: datestamp or temporal index.
  • cutoff: the last datestamp or temporal index for each of the n_windows.
  • y: true value.
  • "model": a column with the model’s name and its forecast values.

Evaluate Model

We can now compute the accuracy of the forecast using an appropriate accuracy metric. Here we’ll use the Root Mean Squared Error (RMSE). To do this, we first need to install datasetsforecast, a Python library developed by Nixtla that includes a function to compute the RMSE.

!pip install datasetsforecast
from datasetsforecast.losses import rmse

The function to compute the RMSE takes two arguments:

  1. The actual values.
  2. The forecasts, in this case, Theta.
error = rmse(crossvalidation_df['y'], crossvalidation_df["Theta"])  # avoid shadowing the imported rmse function
print("RMSE using cross-validation: ", error)
RMSE using cross-validation:  12.643162

As you have noticed, we have used the cross validation results to perform the evaluation of our model.
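Because crossvalidation_df contains a cutoff column, we can also break the error down by window. A small sketch (the helper rmse_by_window is ours):

def rmse_by_window(cv_df, model="Theta"):
    """RMSE of one model for each cross-validation cutoff."""
    return (cv_df.reset_index()
                 .groupby("cutoff")
                 .apply(lambda w: rmse(w["y"], w[model])))

# rmse_by_window(crossvalidation_df)

This reuses the rmse function imported above and returns one error value per cutoff, which helps to check whether accuracy is stable across windows.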

Now we are going to evaluate our model on the test-set predictions, using several metrics (MAE, MAPE, MASE, RMSE, sMAPE) to measure accuracy.

from datasetsforecast.losses import mae, mape, mase, rmse, smape
def evaluate_performance(y_hist, y_true, y_pred, model):
    y_true = y_true.merge(y_pred, how='left', on=['unique_id', 'ds'])
    evaluation = {}
    evaluation[model] = {}
    for metric in [mase, mae, mape, rmse, smape]:
        metric_name = metric.__name__
        if metric_name == 'mase':
            evaluation[model][metric_name] = metric(y_true['y'].values, 
                                                y_true[model].values, 
                                                y_hist['y'].values, seasonality=12)
        else:
            evaluation[model][metric_name] = metric(y_true['y'].values, y_true[model].values)
    return pd.DataFrame(evaluation).T
evaluate_performance(train, test, Y_hat, model="Theta")
            mae      mape     mase      rmse     smape
Theta  8.111287  0.964855  0.36478  9.730347  0.965874

Acknowledgements

We would like to thank Naren Castellon for writing this tutorial.

References

  1. Jose A. Fiorucci, Tiago R. Pellegrini, Francisco Louzada, Fotios Petropoulos and Anne B. Koehler (2016). “Models for optimising the theta method and their relationship to state space models”. International Journal of Forecasting.
  2. Nixtla Parameters.
  3. Pandas available frequencies.
  4. Rob J. Hyndman and George Athanasopoulos (2018). “Forecasting: Principles and Practice”. Time series cross-validation.
  5. Seasonal periods - Rob J. Hyndman.