Step-by-step guide on using the CrostonSBA model with StatsForecast.
During this walkthrough, we will become familiar with the main StatsForecast class and some relevant methods such as StatsForecast.plot, StatsForecast.forecast and StatsForecast.cross_validation, among others.
The text in this article is largely taken from:
1. Changquan Huang and Alla Petukhina. Springer series (2022). Applied Time Series Analysis and Forecasting with Python.
2. Ivan Svetunkov. Forecasting and Analytics with the Augmented Dynamic Adaptive Model (ADAM).
3. James D. Hamilton. Time Series Analysis. Princeton University Press, Princeton, New Jersey, 1st Edition, 1994.
4. Rob J. Hyndman and George Athanasopoulos (2018). “Forecasting: Principles and Practice (3rd ed)”.
Introduction
The Croston model is a method used to forecast time series with
intermittent demand data, that is, data that has many periods of zero
demand and only a few periods of non-zero demand. Croston’s approach was
originally proposed by J.D. Croston in 1972. Subsequently, Syntetos and
Boylan proposed an improvement to the original model in 2001, known as
the Croston-SBA (Syntetos and Boylan Approximation).
The Croston-SBA model is based on the assumption that demand occurrence follows a Bernoulli process, so the intervals between non-zero demand periods are geometrically distributed. Instead of directly modeling demand, the focus is on modeling the intervals between demand periods. The model has two main components: one to model the intervals between demand periods, and another to model the demand sizes when demand occurs.
It is important to note that the Croston-SBA model assumes that the intervals between the non-zero demand periods are independent. This model is an approximation and may not work well in all situations, so it is advisable to evaluate its performance on historical data before using it in practice.
Croston SBA Model
The formula of SBA is very similar to the original Croston’s method; however, it applies a correction factor that reduces the bias in the final estimate.
If Z_t = 0, then:
Z'_t = Z'_{t-1}
P'_t = P'_{t-1}
Otherwise:
Z'_t = α Z_t + (1 − α) Z'_{t-1}
P'_t = α P_t + (1 − α) P'_{t-1}, where 0 < α < 1
Y'_t = (1 − α/2) (Z'_t / P'_t)
where:
- Y'_t: forecast of the average demand per period
- Z_t: actual demand size at period t
- Z'_t: smoothed estimate of the demand size
- P_t: time between two positive demands
- P'_t: smoothed estimate of the demand interval
- α: smoothing constant
Note: In Croston’s method, the results often present a considerable positive bias, whereas in SBA the bias is reduced, and the result may sometimes be slightly negatively biased.
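To make the recursion concrete, here is a minimal sketch of SBA in plain Python. This only illustrates the equations above; it is not statsforecast’s implementation, and the alpha value and example series are arbitrary choices:
import numpy as np

def croston_sba(y, alpha=0.1):
    # Split the series into demand sizes Z_t and inter-demand intervals P_t
    y = np.asarray(y, dtype=float)
    nonzero = np.flatnonzero(y)
    if nonzero.size == 0:
        return 0.0
    z = y[nonzero]                                 # demand sizes
    p = np.diff(np.concatenate(([-1], nonzero)))   # periods between demands
    z_hat, p_hat = z[0], p[0]
    # Simple exponential smoothing of sizes and intervals, updated
    # only at periods where demand occurs
    for z_t, p_t in zip(z[1:], p[1:]):
        z_hat = alpha * z_t + (1 - alpha) * z_hat
        p_hat = alpha * p_t + (1 - alpha) * p_hat
    # SBA bias correction: (1 - alpha/2) * Z'/P'
    return (1 - alpha / 2) * z_hat / p_hat

print(croston_sba([0, 10, 0, 0, 100, 0, 20, 0, 0, 0, 30]))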
Principles of the Croston SBA method
The Croston SBA (Syntetos and Boylan Approximation) method is a technique used for forecasting time series with intermittent or sporadic data.
This methodology is based on the original Croston method, which was
developed to forecast inventory demand in situations where data is
sparse or not available at regular intervals.
The main properties of the Croston SBA method are the following:
- Suitable for intermittent data: The Croston SBA method is especially useful when the data exhibits intermittent patterns, that is, periods of demand followed by periods of non-demand. Instead of treating the data as zero for non-demand periods, the Croston SBA method estimates demand occurrence rates and conditional demand rates.
- Separation of frequency and level: One of the key features of the Croston SBA method is that it separates the frequency and level information in the demand data. This allows these two components to be modeled and forecasted separately, which can result in better predictions.
- Estimation of occurrence and demand rates: The Croston SBA method uses a simple exponential smoothing technique to estimate conditional occurrence and demand rates. These rates are then used to forecast future demand.
- Does not assume a distribution of the data: Unlike some forecasting techniques that assume a specific distribution of the data, the Croston SBA method makes no assumptions about the distribution of demand. This makes it more flexible and applicable to a wide range of situations.
- Does not require complete historical data: The Croston SBA method can work even when historical data is sparse or not available at regular intervals. This makes it an attractive option when it comes to forecasting intermittent demand with limited data.
It is important to note that the Croston SBA method is an approximation
and may not be suitable for all cases. It is recommended to evaluate its
performance in conjunction with other forecasting techniques and adapt
it according to the specific characteristics of the data and the context
of the problem.
In the Croston SBA method, the data series need not be stationary. In the context of time series, stationarity refers to the property that the statistical properties of the series, such as the mean and variance, are constant over time. In the case of intermittent data, it is common for the series not to meet this assumption, since demand can vary considerably across different periods of time.
The Croston SBA method does not rely on the assumption of stationarity. Instead, it models the frequency and the level of intermittent demand separately, using simple exponential smoothing techniques. This makes it possible to capture demand occurrence patterns and estimate conditional demand rates without requiring stationarity of the series.
Loading libraries and data
Tip
Statsforecast will be needed. To install it, see the instructions.
Next, we import plotting libraries and configure the plotting style.
import matplotlib.pyplot as plt
import seaborn as sns
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

plt.style.use('grayscale')  # other options: fivethirtyeight, grayscale, classic
plt.rcParams['lines.linewidth'] = 1.5
dark_style = {
    'figure.facecolor': '#008080',  # #212946
    'axes.facecolor': '#008080',
    'savefig.facecolor': '#008080',
    'axes.grid': True,
    'axes.grid.which': 'both',
    'axes.spines.left': False,
    'axes.spines.right': False,
    'axes.spines.top': False,
    'axes.spines.bottom': False,
    'grid.color': '#000000',  # #2A3459
    'grid.linewidth': '1',
    'text.color': '0.9',
    'axes.labelcolor': '0.9',
    'xtick.color': '0.9',
    'ytick.color': '0.9',
    'font.size': 12,
}
plt.rcParams.update(dark_style)

from pylab import rcParams
rcParams['figure.figsize'] = (18, 7)
import pandas as pd
df=pd.read_csv("https://raw.githubusercontent.com/Naren8520/Serie-de-tiempo-con-Machine-Learning/main/Data/intermittend_demand2")
df.head()
|   | date | sales |
|---|---|---|
| 0 | 2022-01-01 00:00:00 | 0 |
| 1 | 2022-01-01 01:00:00 | 10 |
| 2 | 2022-01-01 02:00:00 | 0 |
| 3 | 2022-01-01 03:00:00 | 0 |
| 4 | 2022-01-01 04:00:00 | 100 |
The input to StatsForecast is always a data frame in long format with three columns: unique_id, ds and y:
- unique_id (string, int or category): an identifier for the series.
- ds (datestamp): a column in a format expected by pandas, ideally YYYY-MM-DD for a date or YYYY-MM-DD HH:MM:SS for a timestamp.
- y (numeric): the measurement we wish to forecast.
df["unique_id"]="1"
df.columns=["ds", "y", "unique_id"]
df.head()
|   | ds | y | unique_id |
|---|---|---|---|
| 0 | 2022-01-01 00:00:00 | 0 | 1 |
| 1 | 2022-01-01 01:00:00 | 10 | 1 |
| 2 | 2022-01-01 02:00:00 | 0 | 1 |
| 3 | 2022-01-01 03:00:00 | 0 | 1 |
| 4 | 2022-01-01 04:00:00 | 100 | 1 |
df.dtypes

ds           object
y             int64
unique_id    object
dtype: object
We can see that our time variable (ds) is of object type; we need to convert it to a datetime format:
df["ds"] = pd.to_datetime(df["ds"])
Explore Data with the plot method
Plot some series using the plot method from the StatsForecast class. This method plots a random series from the dataset and is useful for basic EDA.
from statsforecast import StatsForecast
StatsForecast.plot(df)
Autocorrelation plots
Autocorrelation (ACF) and partial autocorrelation (PACF) plots are
statistical tools used to analyze time series. ACF charts show the
correlation between the values of a time series and their lagged values,
while PACF charts show the correlation between the values of a time
series and their lagged values, after the effect of previous lagged
values has been removed.
ACF and PACF charts can be used to identify the structure of a time series, which can be helpful in choosing a suitable model for it. For example, if the ACF chart shows peaks repeating at regular lags, this indicates that the time series is seasonal; an ACF that decays only very slowly suggests the series is non-stationary. If the PACF chart cuts off sharply after a few lags, this suggests that a low-order autoregressive model may fit the series.
The importance of the ACF and PACF charts is that they can help analysts
better understand the structure of a time series. This understanding can
be helpful in choosing a suitable model for the time series, which can
improve the ability to predict future values of the time series.
To analyze ACF and PACF charts:
- Look for patterns in charts. Common patterns include repeating peaks
and valleys, sawtooth patterns, and plateau patterns.
- Compare ACF and PACF charts. The PACF chart generally has fewer
spikes than the ACF chart.
- Consider the length of the time series. ACF and PACF charts for
longer time series will have more spikes.
- Use a confidence interval. The ACF and PACF plots also show
confidence intervals for the autocorrelation values. If an
autocorrelation value is outside the confidence interval, it is
likely to be significant.
fig, axs = plt.subplots(nrows=1, ncols=2)
plot_acf(df["y"], lags=30, ax=axs[0], color="fuchsia")
axs[0].set_title("Autocorrelation")
plot_pacf(df["y"], lags=30, ax=axs[1], color="lime")
axs[1].set_title("Partial Autocorrelation")
plt.show()
Decomposition of the time series
How to decompose a time series and why?
In time series analysis, forecasting new values requires understanding past data; more formally, it is very important to know the patterns that values follow over time. There can be many reasons that cause our forecast values to go in the wrong direction. Basically, a time series consists of four components, and variation in those components causes changes in the pattern of the time series. These components are:
- Level: the baseline value around which the series averages over time.
- Trend: the component that causes increasing or decreasing patterns in a time series.
- Seasonality: a cyclical event that occurs over a short time and causes short-term increasing or decreasing patterns in a time series.
- Residual/Noise: the random variations in the time series.
Combining these components over time forms a time series. Most time series contain a level and noise/residual; trend and seasonality are optional.
If seasonality and trend are part of the time series, they will affect the forecast values, since the pattern of the forecasted series may differ from that of the historical series.
The components of a time series can be combined in two ways:
- Additive
- Multiplicative
Additive time series
If the components of the time series are added together to form the series, then it is called an additive time series. By visualization, we can say that a time series is additive if its increasing or decreasing pattern is similar throughout the series. The mathematical function of any additive time series can be represented by:
y(t) = Level + Trend + Seasonality + Noise
Multiplicative time series
If the components of the time series are multiplied together, then it is called a multiplicative time series. By visualization, if the time series exhibits exponential growth or decline over time, it can be considered a multiplicative time series. The mathematical function of a multiplicative time series can be represented as:
y(t) = Level * Trend * Seasonality * Noise
from statsmodels.tsa.seasonal import seasonal_decompose
from plotly.subplots import make_subplots
import plotly.graph_objects as go

def plotSeasonalDecompose(
    x,
    model='additive',
    filt=None,
    period=None,
    two_sided=True,
    extrapolate_trend=0,
    title="Seasonal Decomposition"):
    # Decompose the series and plot each component in its own subplot
    result = seasonal_decompose(
        x, model=model, filt=filt, period=period,
        two_sided=two_sided, extrapolate_trend=extrapolate_trend)
    fig = make_subplots(
        rows=4, cols=1,
        subplot_titles=["Observed", "Trend", "Seasonal", "Residuals"])
    for idx, col in enumerate(['observed', 'trend', 'seasonal', 'resid']):
        fig.add_trace(
            go.Scatter(x=result.observed.index, y=getattr(result, col), mode='lines'),
            row=idx+1, col=1,
        )
    fig.update_layout(title=title, showlegend=False)  # apply the title argument
    return fig

plotSeasonalDecompose(
    df["y"],
    model="additive",
    period=24,
    title="Seasonal Decomposition")
Split the data into training and testing
Let’s divide our data into two sets:
1. Data to train our Croston SBA model.
2. Data to test our model.
For the test data we will use the last 500 hours to test and evaluate the performance of our model.
train = df[df.ds<='2023-01-31 19:00:00']
test = df[df.ds>'2023-01-31 19:00:00']
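A quick sanity check on the split; the exact counts depend on the dataset, but the test set should contain the last 500 hours:
print(train.shape, test.shape)
print(train["ds"].max(), test["ds"].min())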
Implementation of CrostonSBA with StatsForecast
Load libraries
from statsforecast import StatsForecast
from statsforecast.models import CrostonSBA
Instantiating Model
Import and instantiate the models. Setting the arguments is sometimes tricky. This article on Seasonal periods by the master, Rob Hyndman, can be useful for season_length.
season_length = 24 # Hourly data
horizon = len(test) # number of predictions
# We call the model that we are going to use
models = [CrostonSBA()]
We fit the models by instantiating a new StatsForecast object with the following parameters:
- models: a list of models. Select the models you want from models and import them.
- freq: a string indicating the frequency of the data. (See pandas’ available frequencies.)
- n_jobs (int): number of jobs used in the parallel processing; use -1 for all cores.
- fallback_model: a model to be used if a model fails.
Any settings are passed into the constructor. Then you call its fit method and pass in the historical data frame.
sf = StatsForecast(models=models, freq='h')
Fit the Model
sf.fit(df=train)

StatsForecast(models=[CrostonSBA])
Let’s see the results of our Croston SBA model. We can observe them with the following instruction:
result = sf.fitted_[0,0].model_
result
{'mean': array([26.04749601]),
'fitted': array([ nan, 0. , 4.75 , ..., 29.088629, 29.088629,
29.088629], dtype=float32),
'sigma': np.float32(49.512943)}
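Since the fitted values are stored in the model dictionary, we can compare them with the training data. A minimal sketch, assuming the fitted array aligns one-to-one with the rows of train:
# In-sample diagnostic: actuals vs. CrostonSBA fitted values
fitted = result['fitted']
plt.plot(train['ds'], train['y'], label='y', alpha=0.5)
plt.plot(train['ds'], fitted, label='CrostonSBA fitted')
plt.legend()
plt.show()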
Forecast Method
If you want to gain speed in productive settings where you have multiple series or models, we recommend using the StatsForecast.forecast method instead of .fit and .predict.
The main difference is that .forecast does not store the fitted values and is highly scalable in distributed environments.
The forecast method takes two arguments: h (the forecast horizon) and, optionally, level for prediction intervals.
- h (int): represents the forecast h steps into the future. In this case, 500 hours ahead.
The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals when level is specified. Depending on your computer, this step should take only a few seconds.
Y_hat = sf.forecast(df=train, h=horizon)
Y_hat
|   | unique_id | ds | CrostonSBA |
|---|---|---|---|
| 0 | 1 | 2023-01-31 20:00:00 | 26.047497 |
| 1 | 1 | 2023-01-31 21:00:00 | 26.047497 |
| 2 | 1 | 2023-01-31 22:00:00 | 26.047497 |
| … | … | … | … |
| 497 | 1 | 2023-02-21 13:00:00 | 26.047497 |
| 498 | 1 | 2023-02-21 14:00:00 | 26.047497 |
| 499 | 1 | 2023-02-21 15:00:00 | 26.047497 |
sf.plot(train, Y_hat, max_insample_length=500)
Predict method with confidence interval
To generate forecasts use the predict method.
The predict method takes two arguments: h (the forecast horizon) and, optionally, level for prediction intervals.
- h (int): represents the forecast h steps into the future. In this case, 500 hours ahead.
The forecast object here is a new data frame that includes a column with the name of the model and the y hat values, as well as columns for the uncertainty intervals when level is specified.
This step should take less than 1 second.
forecast_df = sf.predict(h=horizon)
forecast_df
|   | unique_id | ds | CrostonSBA |
|---|---|---|---|
| 0 | 1 | 2023-01-31 20:00:00 | 26.047497 |
| 1 | 1 | 2023-01-31 21:00:00 | 26.047497 |
| 2 | 1 | 2023-01-31 22:00:00 | 26.047497 |
| … | … | … | … |
| 497 | 1 | 2023-02-21 13:00:00 | 26.047497 |
| 498 | 1 | 2023-02-21 14:00:00 | 26.047497 |
| 499 | 1 | 2023-02-21 15:00:00 | 26.047497 |
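As with the forecast output, the predictions can be plotted against the tail of the training data using the same plot call as before:
sf.plot(train, forecast_df, max_insample_length=500)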
Cross-validation
In previous steps, we’ve taken our historical data to predict the future. However, to assess its accuracy, we would also like to know how the model would have performed in the past. To assess the accuracy and robustness of your model on your data, perform Cross-Validation.
With time series data, Cross Validation is done by defining a sliding
window across the historical data and predicting the period following
it. This form of cross-validation allows us to arrive at a better
estimation of our model’s predictive abilities across a wider range of
temporal instances while also keeping the data in the training set
contiguous as is required by our models.
The following graph depicts such a Cross Validation Strategy:
Cross-validation of time series models is considered a best practice but
most implementations are very slow. The statsforecast library implements
cross-validation as a distributed operation, making the process less
time-consuming to perform. If you have big datasets you can also perform
Cross Validation in a distributed cluster using Ray, Dask or Spark.
In this case, we want to evaluate the performance of the model over the last 5 windows (n_windows=5), forecasting every 50 hours (step_size=50). Depending on your computer, this step should take around 1 min.
The cross_validation method from the StatsForecast class takes the following arguments:
- df: training data frame.
- h (int): represents h steps into the future that are being forecasted. In this case, 500 hours ahead.
- step_size (int): step size between each window. In other words: how often do you want to run the forecasting process.
- n_windows (int): number of windows used for cross validation. In other words: how many forecasting processes in the past do you want to evaluate.
crossvalidation_df = sf.cross_validation(
    df=df,
    h=horizon,
    step_size=50,
    n_windows=5,
)
The crossvalidation_df object is a new data frame that includes the following columns:
- unique_id: series identifier.
- ds: datestamp or temporal index.
- cutoff: the last datestamp or temporal index for the n_windows.
- y: true value.
- model: columns with the model’s name and fitted value.
|   | unique_id | ds | cutoff | y | CrostonSBA |
|---|---|---|---|---|---|
| 0 | 1 | 2023-01-23 12:00:00 | 2023-01-23 11:00:00 | 0.0 | 22.473040 |
| 1 | 1 | 2023-01-23 13:00:00 | 2023-01-23 11:00:00 | 0.0 | 22.473040 |
| 2 | 1 | 2023-01-23 14:00:00 | 2023-01-23 11:00:00 | 0.0 | 22.473040 |
| … | … | … | … | … | … |
| 2497 | 1 | 2023-02-21 13:00:00 | 2023-01-31 19:00:00 | 60.0 | 26.047497 |
| 2498 | 1 | 2023-02-21 14:00:00 | 2023-01-31 19:00:00 | 20.0 | 26.047497 |
| 2499 | 1 | 2023-02-21 15:00:00 | 2023-01-31 19:00:00 | 20.0 | 26.047497 |
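The cross-validation frame can also be scored per window. A hedged sketch using utilsforecast.losses, grouping by cutoff (column names as in the table above; this approach is one of several ways to summarize the windows):
import utilsforecast.losses as ufl

# RMSE of the CrostonSBA forecasts within each cross-validation window
cv_rmse = (
    crossvalidation_df
    .groupby('cutoff')
    .apply(lambda w: ufl.rmse(w, models=['CrostonSBA'])['CrostonSBA'].iloc[0])
)
print(cv_rmse)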
Model Evaluation
Now we are going to evaluate our model with the results of the predictions. We will use several error metrics (MAE, MAPE, MASE, RMSE, sMAPE) to evaluate the accuracy.
from functools import partial
import utilsforecast.losses as ufl
from utilsforecast.evaluation import evaluate
evaluate(
test.merge(Y_hat),
metrics=[ufl.mae, ufl.mape, partial(ufl.mase, seasonality=season_length), ufl.rmse, ufl.smape],
train_df=train,
)
|   | unique_id | metric | CrostonSBA |
|---|---|---|---|
| 0 | 1 | mae | 33.112519 |
| 1 | 1 | mape | 0.626900 |
| 2 | 1 | mase | 0.789945 |
| 3 | 1 | rmse | 45.203519 |
| 4 | 1 | smape | 0.771529 |
References
- Changquan Huang and Alla Petukhina. Springer series (2022). Applied Time Series Analysis and Forecasting with Python.
- Ivan Svetunkov. Forecasting and Analytics with the Augmented Dynamic Adaptive Model (ADAM).
- James D. Hamilton. Time Series Analysis. Princeton University Press, Princeton, New Jersey, 1st Edition, 1994.
- Nixtla CrostonSBA API.
- Pandas available frequencies.
- Rob J. Hyndman and George Athanasopoulos (2018). “Forecasting: Principles and Practice (3rd ed)”.
- Seasonal periods, by Rob J. Hyndman.