Some models create internal representations of the series that can be useful for other models to use as inputs. One example is the MSTL model, which decomposes the series into trend and seasonal components. This guide shows you how to use the mstl_decomposition function to extract those features for training and then use their future values for inference.
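Before diving in, recall the data layout these functions expect: a "long format" dataframe with `unique_id`, `ds` and `y` columns. A minimal synthetic sketch (the series names and values here are made up for illustration):

```python
import pandas as pd

# Tiny long-format frame: two series with an integer time index,
# matching the layout that mstl_decomposition and StatsForecast expect.
toy = pd.DataFrame({
    'unique_id': ['H1'] * 4 + ['H2'] * 4,  # series identifier
    'ds': [1, 2, 3, 4] * 2,                # time index (integers here, so freq=1)
    'y': [605.0, 586.0, 586.0, 559.0, 100.0, 110.0, 120.0, 130.0],
})
```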

```python
import os
from functools import partial

import pandas as pd
import statsforecast
from statsforecast import StatsForecast
from statsforecast.feature_engineering import mstl_decomposition
from statsforecast.models import ARIMA, MSTL
from utilsforecast.evaluation import evaluate
from utilsforecast.losses import smape, mase

# this makes it so that the outputs of the predict methods have the id as a column
# instead of as the index
os.environ['NIXTLA_ID_AS_COL'] = '1'

df = pd.read_parquet('https://datasets-nixtla.s3.amazonaws.com/m4-hourly.parquet')
uids = df['unique_id'].unique()[:10]  # keep only the first 10 series
df = df[df['unique_id'].isin(uids)]
df.head()
```
|   | unique_id | ds | y |
|---|-----------|----|-------|
| 0 | H1 | 1 | 605.0 |
| 1 | H1 | 2 | 586.0 |
| 2 | H1 | 3 | 586.0 |
| 3 | H1 | 4 | 559.0 |
| 4 | H1 | 5 | 511.0 |

Suppose that you want to use an ARIMA model to forecast your series but you want to incorporate the trend and seasonal components from the MSTL model as external regressors. You can define the MSTL model to use and then provide it to the mstl_decomposition function.
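To build intuition for what the decomposition produces, here is a simplified additive decomposition on a synthetic series. This is not MSTL itself (MSTL fits the components with LOESS); it is a moving-average trend plus seasonal means, but the output has the same shape: the series splits into trend, seasonal and remainder parts.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
period = 24
n = 10 * period
t = np.arange(n)
# Synthetic hourly series: linear trend + daily seasonality + noise.
y = 0.5 * t + 10 * np.sin(2 * np.pi * t / period) + rng.normal(0, 1, n)

# Simplified additive decomposition: centered moving-average trend,
# then per-hour means of the detrended series as the seasonal component.
trend = pd.Series(y).rolling(period, center=True, min_periods=1).mean().to_numpy()
detrended = y - trend
pattern = np.array([detrended[i::period].mean() for i in range(period)])
seasonal = pattern[t % period]
remainder = y - trend - seasonal
```

The trend and seasonal arrays are exactly what gets attached as extra columns for the downstream model.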

```python
freq = 1
season_length = 24
horizon = 2 * season_length

# hold out the last two seasonal periods of each series for validation
valid = df.groupby('unique_id').tail(horizon)
train = df.drop(valid.index)

model = MSTL(season_length=season_length)
transformed_df, X_df = mstl_decomposition(train, model=model, freq=freq, h=horizon)
```

This generates the dataframe we should use for training (the original series with the trend and seasonal columns added), as well as the dataframe holding the future values of those components to use when forecasting.

```python
transformed_df.head()
```

|   | unique_id | ds | y | trend | seasonal |
|---|-----------|----|-------|------------|-------------|
| 0 | H1 | 1 | 605.0 | 501.350550 | 124.683643 |
| 1 | H1 | 2 | 586.0 | 506.424549 | 87.115039 |
| 2 | H1 | 3 | 586.0 | 511.453736 | 79.479564 |
| 3 | H1 | 4 | 559.0 | 516.434474 | 45.616992 |
| 4 | H1 | 5 | 511.0 | 521.362991 | -10.940498 |
```python
X_df.head()
```

|   | unique_id | ds | trend | seasonal |
|---|-----------|-----|------------|-------------|
| 0 | H1 | 701 | 626.519191 | -22.738998 |
| 1 | H1 | 702 | 627.423191 | -108.992314 |
| 2 | H1 | 703 | 628.146391 | -148.675478 |
| 3 | H1 | 704 | 628.724951 | -183.715284 |
| 4 | H1 | 705 | 629.187799 | -208.038694 |

We can now train our ARIMA models and compute our forecasts.

```python
sf = StatsForecast(
    models=[ARIMA(order=(1, 0, 1), season_length=season_length)],
    freq=freq,
)
# X_df provides the future values of the exogenous features (trend and seasonal)
preds = sf.forecast(h=horizon, df=transformed_df, X_df=X_df)
preds.head()
```

|   | unique_id | ds | ARIMA |
|---|-----------|-----|------------|
| 0 | H1 | 701 | 597.716125 |
| 1 | H1 | 702 | 513.244507 |
| 2 | H1 | 703 | 475.042450 |
| 3 | H1 | 704 | 441.217743 |
| 4 | H1 | 705 | 417.895813 |
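As an aside on why known future regressor values help: an ARIMA with exogenous variables is essentially a regression on those variables with ARMA errors. A stripped-down numpy sketch of the regression part, with invented coefficients and a synthetic series (not the library's internals):

```python
import numpy as np

rng = np.random.default_rng(1)
n, h = 200, 24
# Known regressors over history *and* future (playing the role of trend/seasonal).
x = np.column_stack([np.arange(n + h), np.sin(2 * np.pi * np.arange(n + h) / 24)])
beta = np.array([0.3, 5.0])
y = x[:n] @ beta + rng.normal(0, 1, n)  # only the history is observed

# Fit the regression part on the history...
beta_hat, *_ = np.linalg.lstsq(x[:n], y, rcond=None)
# ...then plug in the *future* regressor values (the analogue of X_df) to forecast.
fcst = x[n:] @ beta_hat
```

This is why `forecast` needs `X_df`: without future values for the exogenous columns, the regression part has nothing to extrapolate with.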

We can now evaluate the performance.

```python
def compute_evaluation(preds):
    # join the forecasts with the validation targets
    full = preds.merge(valid, on=['unique_id', 'ds'])
    mase24 = partial(mase, seasonality=24)
    res = (
        evaluate(full, metrics=[smape, mase24], train_df=train)
        .groupby('metric')['ARIMA']
        .mean()
    )
    res_smape = '{:.1%}'.format(res['smape'])
    res_mase = '{:.1f}'.format(res['mase'])
    return pd.Series({'mase': res_mase, 'smape': res_smape})

compute_evaluation(preds)
```

```
mase      1.0
smape    3.7%
dtype: object
```
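For reference, the two metrics can be written down in a few lines. These are simplified single-series versions (utilsforecast's implementations handle multiple series and models, and there are several sMAPE variants in the wild; this one is scaled to lie in [0, 1]):

```python
import numpy as np

def smape_(y, yhat):
    # Symmetric MAPE, as a fraction in [0, 1].
    return np.mean(np.abs(y - yhat) / (np.abs(y) + np.abs(yhat)))

def mase_(y, yhat, y_train, season=24):
    # Forecast MAE scaled by the in-sample MAE of the seasonal naive forecast.
    scale = np.mean(np.abs(y_train[season:] - y_train[:-season]))
    return np.mean(np.abs(y - yhat)) / scale
```

A MASE of 1.0 therefore means the model is on par with the seasonal naive baseline on the training data; below 1.0 it beats it.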

We can compare this with forecasting from the series values alone, without the exogenous features.

```python
preds_noexog = sf.forecast(h=horizon, df=train)
compute_evaluation(preds_noexog)
```

```
mase      2.3
smape    7.7%
dtype: object
```