CrostonSBA Model
Step-by-step guide on using the CrostonSBA Model with Statsforecast.
Table of Contents
- Introduction
- Croston SBA Model
- Loading libraries and data
- Explore data with the plot method
- Split the data into training and testing
- Implementation of CrostonSBA with StatsForecast
- Cross-validation
- Model evaluation
- References
Introduction
The Croston model is a method used to forecast time series with intermittent demand data, that is, data that has many periods of zero demand and only a few periods of non-zero demand. Croston’s approach was originally proposed by J.D. Croston in 1972. Subsequently, Syntetos and Boylan proposed an improvement to the original model in 2001, known as the Croston-SBA (Syntetos and Boylan Approximation).
The Croston-SBA model is based on the assumption that demand occurrence follows a Bernoulli process, so that the intervals between demand periods are geometrically distributed. Instead of directly modeling demand, the method works with two components: one that models the intervals between demand periods, and another that models the demand sizes when demand occurs.
It is important to note that the Croston-SBA model assumes that the intervals between non-zero demand periods are independent. Moreover, the model is an approximation and may not work well in all situations, so it is advisable to evaluate its performance on historical data before using it in practice.
Croston SBA Model
The formula of SBA is very similar to the original Croston's method; however, it applies a correction factor of $(1-\alpha/2)$, which reduces the bias in the final estimate.
If $y_t = 0$, then

$$\hat{z}_{t+1} = \hat{z}_t, \qquad \hat{p}_{t+1} = \hat{p}_t$$

Otherwise,

$$\hat{z}_{t+1} = \alpha y_t + (1-\alpha)\hat{z}_t, \qquad \hat{p}_{t+1} = \alpha p_t + (1-\alpha)\hat{p}_t$$

and the forecast of the average demand per period is

$$\hat{y}_{t+1} = \left(1-\frac{\alpha}{2}\right)\frac{\hat{z}_{t+1}}{\hat{p}_{t+1}}$$

where

- $\hat{y}_{t+1}$: average demand per period (the final forecast)
- $y_t$: actual demand at period $t$
- $p_t$: time between two positive demands
- $\hat{z}_{t+1}$: demand size forecast for the next period
- $\hat{p}_{t+1}$: forecast of the demand interval
- $\alpha$: smoothing constant ($0 < \alpha < 1$)
Note: In Croston's method, the result often presents a considerable positive bias, whereas in SBA the bias is reduced and can sometimes be slightly negative.
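To make these recursions concrete, here is a minimal NumPy sketch of the SBA updates above. It is an illustrative implementation with a simple assumed initialization, not the one StatsForecast uses internally:

```python
import numpy as np

def croston_sba(y, alpha=0.1):
    """Minimal SBA sketch: smooth demand sizes and inter-demand intervals
    separately, then apply the (1 - alpha/2) bias-correction factor."""
    y = np.asarray(y, dtype=float)
    nonzero = np.flatnonzero(y > 0)
    if nonzero.size == 0:
        return 0.0                     # no demand observed at all
    z_hat = y[nonzero[0]]              # demand-size estimate
    p_hat = float(nonzero[0] + 1)      # inter-demand-interval estimate
    q = 1                              # periods since the last positive demand
    for t in range(nonzero[0] + 1, len(y)):
        if y[t] > 0:                   # demand occurred: update both estimates
            z_hat = alpha * y[t] + (1 - alpha) * z_hat
            p_hat = alpha * q + (1 - alpha) * p_hat
            q = 1
        else:                          # no demand: estimates stay unchanged
            q += 1
    return (1 - alpha / 2) * z_hat / p_hat

# Flat forecast for every future period of a small intermittent series
print(croston_sba([0, 10, 0, 0, 100, 0, 30], alpha=0.1))
```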
Principles of the Croston SBA method
The Croston SBA (Syntetos and Boylan Approximation) method is a technique used for forecasting time series with intermittent or sporadic data. This methodology is based on the original Croston method, which was developed to forecast inventory demand in situations where data is sparse or not available at regular intervals.
The main properties of the Croston SBA method are the following:
- Suitable for intermittent data: The Croston SBA method is especially useful when the data exhibits intermittent patterns, that is, periods of demand followed by periods of no demand. Instead of treating the data as zero for non-demand periods, the Croston SBA method estimates demand occurrence rates and conditional demand rates.
- Separation of frequency and level: One of the key features of the Croston SBA method is that it separates the frequency and level information in the demand data. This allows these two components to be modeled and forecasted separately, which can result in better predictions.
- Estimation of occurrence and demand rates: The Croston SBA method uses a simple exponential smoothing technique to estimate conditional occurrence and demand rates. These rates are then used to forecast future demand.
- Does not assume a distribution of the data: Unlike some forecasting techniques that assume a specific distribution of the data, the Croston SBA method makes no assumptions about the distribution of demand. This makes it more flexible and applicable to a wide range of situations.
- Does not require complete historical data: The Croston SBA method can work even when historical data is sparse or not available at regular intervals. This makes it an attractive option when forecasting intermittent demand with limited data.
It is important to note that the Croston SBA method is an approximation and may not be suitable for all cases. It is recommended to evaluate its performance in conjunction with other forecasting techniques and adapt it according to the specific characteristics of the data and the context of the problem.
In the Croston SBA method, the data series need not be stationary. The Croston SBA approach is suitable for forecasting time series with intermittent data, where periods of demand are interspersed with periods of non-demand.
The Croston SBA method is based on the estimation of occurrence rates and conditional demand rates, using simple exponential smoothing techniques. These rates are used to forecast future demand.
In the context of time series, stationarity refers to the property that the statistical properties of the series, such as the mean and variance, are constant over time. However, in the case of intermittent data, it is common for the series not to meet the assumptions of stationarity, since the demand can vary considerably in different periods of time.
The Croston SBA method is not based on the assumption of stationarity of the data series. Instead, it focuses on modeling the frequency and level of intermittent demand separately, using simple exponential smoothing techniques. This makes it possible to capture demand occurrence patterns and estimate conditional demand rates, without requiring the stationarity of the series.
Loading libraries and data
Tip
Statsforecast will be needed. To install, see instructions.
Next, we import plotting libraries and configure the plotting style.
import matplotlib.pyplot as plt
import seaborn as sns
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
plt.style.use('grayscale') # fivethirtyeight grayscale classic
plt.rcParams['lines.linewidth'] = 1.5
dark_style = {
'figure.facecolor': '#008080', # #212946
'axes.facecolor': '#008080',
'savefig.facecolor': '#008080',
'axes.grid': True,
'axes.grid.which': 'both',
'axes.spines.left': False,
'axes.spines.right': False,
'axes.spines.top': False,
'axes.spines.bottom': False,
'grid.color': '#000000', #2A3459
'grid.linewidth': '1',
'text.color': '0.9',
'axes.labelcolor': '0.9',
'xtick.color': '0.9',
'ytick.color': '0.9',
'font.size': 12 }
plt.rcParams.update(dark_style)
from pylab import rcParams
rcParams['figure.figsize'] = (18,7)
import pandas as pd
df=pd.read_csv("https://raw.githubusercontent.com/Naren8520/Serie-de-tiempo-con-Machine-Learning/main/Data/intermittend_demand2")
df.head()
| | date | sales |
|---|---|---|
| 0 | 2022-01-01 00:00:00 | 0 |
| 1 | 2022-01-01 01:00:00 | 10 |
| 2 | 2022-01-01 02:00:00 | 0 |
| 3 | 2022-01-01 03:00:00 | 0 |
| 4 | 2022-01-01 04:00:00 | 100 |
The input to StatsForecast is always a data frame in long format with three columns: unique_id, ds and y:

- The unique_id (string, int or category) represents an identifier for the series.
- The ds (datestamp) column should be of a format expected by Pandas, ideally YYYY-MM-DD for a date or YYYY-MM-DD HH:MM:SS for a timestamp.
- The y (numeric) represents the measurement we wish to forecast.
df["unique_id"]="1"
df.columns=["ds", "y", "unique_id"]
df.head()
| | ds | y | unique_id |
|---|---|---|---|
| 0 | 2022-01-01 00:00:00 | 0 | 1 |
| 1 | 2022-01-01 01:00:00 | 10 | 1 |
| 2 | 2022-01-01 02:00:00 | 0 | 1 |
| 3 | 2022-01-01 03:00:00 | 0 | 1 |
| 4 | 2022-01-01 04:00:00 | 100 | 1 |
print(df.dtypes)
ds object
y int64
unique_id object
dtype: object
We can see that our time variable (ds) is stored as an object, so we need to convert it to a datetime format.
df["ds"] = pd.to_datetime(df["ds"])
Explore Data with the plot method
Plot some series using the plot method from the StatsForecast class. This method prints a random series from the dataset and is useful for basic EDA.
from statsforecast import StatsForecast
StatsForecast.plot(df)
Autocorrelation plots
Autocorrelation (ACF) and partial autocorrelation (PACF) plots are statistical tools used to analyze time series. ACF charts show the correlation between the values of a time series and their lagged values, while PACF charts show the correlation between the values of a time series and their lagged values, after the effect of previous lagged values has been removed.
ACF and PACF charts can be used to identify the structure of a time series, which can be helpful in choosing a suitable model for it. For example, a slowly decaying ACF suggests the series is non-stationary (e.g., trending), while peaks that repeat at fixed lags point to seasonality. A PACF that cuts off sharply after a few lags suggests that a low-order autoregressive model may fit the series well.
The importance of the ACF and PACF charts is that they can help analysts better understand the structure of a time series. This understanding can be helpful in choosing a suitable model for the time series, which can improve the ability to predict future values of the time series.
To analyze ACF and PACF charts:
- Look for patterns in charts. Common patterns include repeating peaks and valleys, sawtooth patterns, and plateau patterns.
- Compare ACF and PACF charts. The PACF chart generally has fewer spikes than the ACF chart.
- Consider the length of the time series. ACF and PACF charts for longer time series will have more spikes.
- Use a confidence interval. The ACF and PACF plots also show confidence intervals for the autocorrelation values. If an autocorrelation value is outside the confidence interval, it is likely to be significant.
fig, axs = plt.subplots(nrows=1, ncols=2)
plot_acf(df["y"], lags=30, ax=axs[0],color="fuchsia")
axs[0].set_title("Autocorrelation");
plot_pacf(df["y"], lags=30, ax=axs[1],color="lime")
axs[1].set_title('Partial Autocorrelation')
plt.show();
Decomposition of the time series
How to decompose a time series and why?
In time series analysis to forecast new values, it is very important to know past data. More formally, we can say that it is very important to know the patterns that values follow over time. There can be many reasons that cause our forecast values to fall in the wrong direction. Basically, a time series consists of four components. The variation of those components causes the change in the pattern of the time series. These components are:
- Level: The baseline value around which the series averages over time.
- Trend: The trend is the value that causes increasing or decreasing patterns in a time series.
- Seasonality: This is a cyclical event that occurs in a time series for a short time and causes short-term increasing or decreasing patterns in a time series.
- Residual/Noise: These are the random variations in the time series.
Combining these components over time leads to the formation of a time series. Most time series consist of a level and noise/residual component, while trend and seasonality are optional.
If seasonality and trend are part of the time series, they will affect the forecast values, since the pattern of the forecasted series may differ from that of the historical series.
The combination of the components in a time series can be of two types: additive or multiplicative.
Additive time series
If the components of the time series are added together to form the series, then it is called an additive time series. By visualization, we can say that a time series is additive if the magnitude of its increasing or decreasing pattern is similar throughout the series. An additive time series can be represented as:

$$y_t = L_t + T_t + S_t + R_t$$

where $L_t$, $T_t$, $S_t$ and $R_t$ are the level, trend, seasonal and residual components at time $t$.
Multiplicative time series
If the components of the time series are multiplied together, then it is called a multiplicative time series. By visualization, if the time series shows exponential growth or decline over time, it can be considered a multiplicative time series. A multiplicative time series can be represented as:

$$y_t = L_t \times T_t \times S_t \times R_t$$
from statsmodels.tsa.seasonal import seasonal_decompose
from plotly.subplots import make_subplots
import plotly.graph_objects as go
def plotSeasonalDecompose(
    x,
    model='additive',
    filt=None,
    period=None,
    two_sided=True,
    extrapolate_trend=0,
    title="Seasonal Decomposition"):
    # decompose the series into observed, trend, seasonal and residual parts
    result = seasonal_decompose(
        x, model=model, filt=filt, period=period,
        two_sided=two_sided, extrapolate_trend=extrapolate_trend)
    fig = make_subplots(
        rows=4, cols=1,
        subplot_titles=["Observed", "Trend", "Seasonal", "Residuals"])
    for idx, col in enumerate(['observed', 'trend', 'seasonal', 'resid']):
        fig.add_trace(
            go.Scatter(x=result.observed.index, y=getattr(result, col), mode='lines'),
            row=idx+1, col=1,
        )
    # apply the title argument (previously unused) and hide the legend
    fig.update_layout(title_text=title, showlegend=False)
    return fig
plotSeasonalDecompose(
df["y"],
model="additive",
period=24,
title="Seasonal Decomposition")
Split the data into training and testing
Let's divide our data into two sets:

1. Data to train our Croston SBA Model.
2. Data to test our model.

For the test data we will use the last 500 hours to test and evaluate the performance of our model.
train = df[df.ds<='2023-01-31 19:00:00']
test = df[df.ds>'2023-01-31 19:00:00']
train.shape, test.shape
((9500, 3), (500, 3))
Now let’s plot the training data and the test data.
sns.lineplot(train,x="ds", y="y", label="Train", linestyle="--",linewidth=2)
sns.lineplot(test, x="ds", y="y", label="Test", linewidth=2, color="yellow")
plt.title("Store visit");
plt.xlabel("Hours")
plt.show()
Implementation of CrostonSBA with StatsForecast
The parameters of the CrostonSBA Model are listed below. For more information, visit the documentation.

alias : str
Custom name of the model.
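For example, the alias parameter can be used to rename the model's column in the output data frames (a small illustrative snippet):

```python
from statsforecast.models import CrostonSBA

# With an alias, forecast columns are labeled "SBA" instead of "CrostonSBA"
model_with_alias = CrostonSBA(alias="SBA")
```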
Load libraries
from statsforecast import StatsForecast
from statsforecast.models import CrostonSBA
Instantiating Model
Import and instantiate the model. Setting the season_length argument is sometimes tricky; this article on Seasonal periods by the master, Rob Hyndman, can be useful. (Note that CrostonSBA itself only takes the alias parameter listed above, so season_length is defined below for reference only.)
season_length = 24 # Hourly data
horizon = len(test) # number of predictions
# We call the model that we are going to use
models = [CrostonSBA()]
We fit the models by instantiating a new StatsForecast object with the following parameters:

- models: a list of models. Select the models you want from models and import them.
- freq: a string indicating the frequency of the data. (See pandas' available frequencies.)
- n_jobs: int, number of jobs used in the parallel processing; use -1 for all cores.
- fallback_model: a model to be used if a model fails.

Any settings are passed into the constructor. Then you call its fit method and pass in the historical data frame.
sf = StatsForecast(df=df,
models=models,
freq='H',
n_jobs=-1)
Fit the Model
sf.fit()
StatsForecast(models=[CrostonSBA])
Let's see the results of our Croston SBA Model. We can observe it with the following instruction:
result=sf.fitted_[0,0].model_
result
{'mean': array([22.426361], dtype=float32)}
Forecast Method
If you want to gain speed in productive settings where you have multiple series or models, we recommend using the StatsForecast.forecast method instead of .fit and .predict.

The main difference is that .forecast does not store the fitted values and is highly scalable in distributed environments.
The forecast method takes two arguments: the forecast horizon h and level.

- h (int): the number of steps to forecast into the future. In this case, 500 hours ahead.
The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals when a level is specified. Depending on your computer, this step should take around 1 min.
Y_hat = sf.forecast(horizon)
Y_hat
| unique_id | ds | CrostonSBA |
|---|---|---|
| 1 | 2023-02-21 16:00:00 | 22.426361 |
| 1 | 2023-02-21 17:00:00 | 22.426361 |
| 1 | 2023-02-21 18:00:00 | 22.426361 |
| … | … | … |
| 1 | 2023-03-14 09:00:00 | 22.426361 |
| 1 | 2023-03-14 10:00:00 | 22.426361 |
| 1 | 2023-03-14 11:00:00 | 22.426361 |
Y_hat=Y_hat.reset_index()
Y_hat
| | unique_id | ds | CrostonSBA |
|---|---|---|---|
| 0 | 1 | 2023-02-21 16:00:00 | 22.426361 |
| 1 | 1 | 2023-02-21 17:00:00 | 22.426361 |
| 2 | 1 | 2023-02-21 18:00:00 | 22.426361 |
| … | … | … | … |
| 497 | 1 | 2023-03-14 09:00:00 | 22.426361 |
| 498 | 1 | 2023-03-14 10:00:00 | 22.426361 |
| 499 | 1 | 2023-03-14 11:00:00 | 22.426361 |
Y_hat1 = pd.concat([df,Y_hat])
Y_hat1
| | ds | y | unique_id | CrostonSBA |
|---|---|---|---|---|
| 0 | 2022-01-01 00:00:00 | 0.0 | 1 | NaN |
| 1 | 2022-01-01 01:00:00 | 10.0 | 1 | NaN |
| 2 | 2022-01-01 02:00:00 | 0.0 | 1 | NaN |
| … | … | … | … | … |
| 497 | 2023-03-14 09:00:00 | NaN | 1 | 22.426361 |
| 498 | 2023-03-14 10:00:00 | NaN | 1 | 22.426361 |
| 499 | 2023-03-14 11:00:00 | NaN | 1 | 22.426361 |
fig, ax = plt.subplots(1, 1)
# Y_hat1 already combines the history and the forecast, so we just set the index
plot_df = Y_hat1.set_index('ds')
plot_df['y'].plot(ax=ax, linewidth=2)
plot_df["CrostonSBA"].plot(ax=ax, linewidth=2, color="yellow")
ax.set_title(' Forecast', fontsize=22)
ax.set_ylabel("Store visit (Hourly data)", fontsize=20)
ax.set_xlabel('Hours', fontsize=20)
ax.legend(prop={'size': 15})
ax.grid(True)
Predict method with confidence interval
To generate forecasts use the predict method.
The predict method takes two arguments: h (for horizon) and level.

- h (int): the number of steps to forecast into the future. In this case, 500 hours ahead.
The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals when a level is specified.
This step should take less than 1 second.
forecast_df = sf.predict(h=horizon)
forecast_df
| unique_id | ds | CrostonSBA |
|---|---|---|
| 1 | 2023-02-21 16:00:00 | 22.426361 |
| 1 | 2023-02-21 17:00:00 | 22.426361 |
| 1 | 2023-02-21 18:00:00 | 22.426361 |
| … | … | … |
| 1 | 2023-03-14 09:00:00 | 22.426361 |
| 1 | 2023-03-14 10:00:00 | 22.426361 |
| 1 | 2023-03-14 11:00:00 | 22.426361 |
We can join the forecast result with the historical data using the pandas function pd.concat(), and then use this result for graphing.
pd.concat([df, forecast_df]).set_index('ds')
| ds | y | unique_id | CrostonSBA |
|---|---|---|---|
| 2022-01-01 00:00:00 | 0.0 | 1 | NaN |
| 2022-01-01 01:00:00 | 10.0 | 1 | NaN |
| 2022-01-01 02:00:00 | 0.0 | 1 | NaN |
| … | … | … | … |
| 2023-03-14 09:00:00 | NaN | NaN | 22.426361 |
| 2023-03-14 10:00:00 | NaN | NaN | 22.426361 |
| 2023-03-14 11:00:00 | NaN | NaN | 22.426361 |
df_plot= pd.concat([df, forecast_df]).set_index('ds').tail(5000)
df_plot
| ds | y | unique_id | CrostonSBA |
|---|---|---|---|
| 2022-08-18 04:00:00 | 0.0 | 1 | NaN |
| 2022-08-18 05:00:00 | 80.0 | 1 | NaN |
| 2022-08-18 06:00:00 | 0.0 | 1 | NaN |
| … | … | … | … |
| 2023-03-14 09:00:00 | NaN | NaN | 22.426361 |
| 2023-03-14 10:00:00 | NaN | NaN | 22.426361 |
| 2023-03-14 11:00:00 | NaN | NaN | 22.426361 |
Now let’s visualize the result of our forecast and the historical data of our time series.
plt.plot(df_plot['y'],label="Actual", linewidth=2.5)
plt.plot(df_plot['CrostonSBA'], label="Croston SBA", color="yellow") # '-', '--', '-.', ':',
plt.title("Store visit (Hourly data)");
plt.xlabel("Hourly")
plt.ylabel("Store visit")
plt.legend()
plt.show();
Let's plot the same graph using the plot function that comes with Statsforecast, as shown below.
sf.plot(df, forecast_df)
Cross-validation
In previous steps, we've taken our historical data to predict the future. However, to assess its accuracy we would also like to know how the model would have performed in the past. To assess the accuracy and robustness of your model on your data, perform Cross-Validation.
With time series data, Cross Validation is done by defining a sliding window across the historical data and predicting the period following it. This form of cross-validation allows us to arrive at a better estimation of our model’s predictive abilities across a wider range of temporal instances while also keeping the data in the training set contiguous as is required by our models.
The following graph depicts such a Cross Validation Strategy:
Perform time series cross-validation
Cross-validation of time series models is considered a best practice but most implementations are very slow. The statsforecast library implements cross-validation as a distributed operation, making the process less time-consuming to perform. If you have big datasets you can also perform Cross Validation in a distributed cluster using Ray, Dask or Spark.
In this case, we want to evaluate the performance of the model over the last five windows (n_windows=5), moving the forecasting origin forward 50 hours at a time (step_size=50). Depending on your computer, this step should take around 1 min.
The cross_validation method from the StatsForecast class takes the following arguments:

- df: training data frame
- h (int): represents h steps into the future that are being forecasted. In this case, 500 hours ahead.
- step_size (int): step size between each window. In other words: how often do you want to run the forecasting process.
- n_windows (int): number of windows used for cross validation. In other words: how many forecasting processes in the past do you want to evaluate.
crossvalidation_df = sf.cross_validation(df=df,
h=horizon,
step_size=50,
n_windows=5)
The crossvalidation_df object is a new data frame that includes the following columns:

- unique_id: index. If you don't like working with an index, just run crossvalidation_df.reset_index().
- ds: datestamp or temporal index
- cutoff: the last datestamp or temporal index for the n_windows.
- y: true value
- model: columns with the model's name and fitted value.
crossvalidation_df
| unique_id | ds | cutoff | y | CrostonSBA |
|---|---|---|---|---|
| 1 | 2023-01-23 12:00:00 | 2023-01-23 11:00:00 | 0.0 | 22.473040 |
| 1 | 2023-01-23 13:00:00 | 2023-01-23 11:00:00 | 0.0 | 22.473040 |
| 1 | 2023-01-23 14:00:00 | 2023-01-23 11:00:00 | 0.0 | 22.473040 |
| … | … | … | … | … |
| 1 | 2023-02-21 13:00:00 | 2023-01-31 19:00:00 | 60.0 | 26.047497 |
| 1 | 2023-02-21 14:00:00 | 2023-01-31 19:00:00 | 20.0 | 26.047497 |
| 1 | 2023-02-21 15:00:00 | 2023-01-31 19:00:00 | 20.0 | 26.047497 |
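Because crossvalidation_df keeps each window's cutoff, we can also inspect how the error varies across windows. A small illustrative sketch:

```python
import numpy as np

# RMSE per cross-validation window, grouped by each window's cutoff
cv = crossvalidation_df.reset_index()
rmse_per_window = cv.groupby("cutoff").apply(
    lambda w: np.sqrt(np.mean((w["y"] - w["CrostonSBA"]) ** 2))
)
print(rmse_per_window)
```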
Model Evaluation
We can now compute the accuracy of the forecast using an appropriate accuracy metric. Here we'll use the Root Mean Squared Error (RMSE). To do this, we first need to install datasetsforecast, a Python library developed by Nixtla that includes a function to compute the RMSE.
!pip install datasetsforecast
from datasetsforecast.losses import rmse
The function to compute the RMSE takes two arguments:
- The actual values.
- The forecasts, in this case,
Croston SBA Model
.
# store the result under a new name to avoid shadowing the imported rmse function
rmse_crostonsba = rmse(crossvalidation_df['y'], crossvalidation_df["CrostonSBA"])
print("RMSE using cross-validation: ", rmse_crostonsba)
RMSE using cross-validation: 47.809525
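Other point metrics can be computed the same way. For instance, the mean absolute error, which is less dominated by large demand spikes than the RMSE, is available as mae in the same datasetsforecast.losses module (a short sketch, assuming that import):

```python
from datasetsforecast.losses import mae

# MAE on the same cross-validation frame, for comparison with the RMSE
mae_crostonsba = mae(crossvalidation_df['y'], crossvalidation_df["CrostonSBA"])
print("MAE using cross-validation: ", mae_crostonsba)
```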
Acknowledgements
We would like to thank Naren Castellon for writing this tutorial.
References
- Changquan Huang and Alla Petukhina (2022). Applied Time Series Analysis and Forecasting with Python. Springer.
- Ivan Svetunkov. Forecasting and Analytics with the Augmented Dynamic Adaptive Model (ADAM).
- James D. Hamilton (1994). Time Series Analysis. Princeton University Press, Princeton, New Jersey, 1st Edition.
- Nixtla Parameters.
- Pandas available frequencies.
- Rob J. Hyndman and George Athanasopoulos (2018). "Forecasting: Principles and Practice, Time series cross-validation".
- Seasonal periods - Rob J. Hyndman.