Prerequisites

This guide assumes basic familiarity with NeuralForecast. For a minimal example, visit the Quick Start.

Follow this article for a step-by-step guide on building a production-ready forecasting pipeline for multiple time series.

During this guide you will gain familiarity with the core NeuralForecast class and some relevant methods like NeuralForecast.fit, NeuralForecast.predict, and NeuralForecast.cross_validation.

We will use a classical benchmarking dataset from the M4 competition. The dataset includes time series from different domains like finance, economics, and sales. In this example, we will use a subset of the Hourly dataset.

We will model the series globally. Therefore, you will train a set of models for the whole dataset and then select the best model for each individual time series. NeuralForecast focuses on speed, simplicity, and scalability, which makes it ideal for this task.

Outline:

  1. Install packages.
  2. Read the data.
  3. Explore the data.
  4. Train many models globally for the entire dataset.
  5. Evaluate the model’s performance using cross-validation.
  6. Select the best model for every unique time series.


Tip

You can use Colab to run this notebook interactively.

Warning

To reduce the computation time, it is recommended to use a GPU. If you are using Colab, do not forget to activate it: go to Runtime > Change runtime type and select GPU as the hardware accelerator.
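
If you want to confirm that a GPU is actually visible before training, a quick sanity check (not part of the original notebook) is to query PyTorch directly, since NeuralForecast uses it as its backend:

import torch

print(torch.cuda.is_available())  # True if a CUDA GPU is available to PyTorch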

1. Install libraries

We assume you have NeuralForecast already installed. Check this guide for instructions on how to install NeuralForecast.

Additionally, we will install s3fs to read from the S3 Filesystem of AWS, statsforecast for plotting, and datasetsforecast for common error metrics like MAE or MASE.

Install the necessary packages with pip:

! pip install statsforecast s3fs datasetsforecast
! pip install git+https://github.com/Nixtla/neuralforecast.git@main

2. Read the data

We will use pandas to read the M4 Hourly dataset, stored in a Parquet file for efficiency. You can use ordinary pandas operations to read your data in other formats like .csv.

The input to NeuralForecast is always a data frame in long format with three columns: unique_id, ds and y:

  • The unique_id (string, int or category) represents an identifier for the series.

  • The ds (datestamp or int) column should be either an integer indexing time or a datestamp, ideally in the format YYYY-MM-DD for a date or YYYY-MM-DD HH:MM:SS for a timestamp.

  • The y (numeric) represents the measurement we wish to forecast.

This dataset already satisfies the requirements.
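
For reference, a minimal sketch of a long-format frame (toy values, not the M4 data) would look like this:

import pandas as pd

toy_df = pd.DataFrame({
    'unique_id': ['series_1'] * 3 + ['series_2'] * 3,  # one identifier per series
    'ds': [1, 2, 3, 1, 2, 3],                          # integer time index
    'y': [10.0, 12.0, 11.0, 100.0, 98.0, 101.0],       # measurements to forecast
})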

Depending on your internet connection, this step should take around 10 seconds.

import pandas as pd

Y_df = pd.read_parquet('https://datasets-nixtla.s3.amazonaws.com/m4-hourly.parquet')

Y_df.head()
   unique_id  ds      y
0         H1   1  605.0
1         H1   2  586.0
2         H1   3  586.0
3         H1   4  559.0
4         H1   5  511.0

This dataset contains 414 unique series with 900 observations on average. For this example and reproducibility's sake, we will select only 10 unique IDs. Depending on your processing infrastructure, feel free to select more or fewer series.

Note

Processing time is dependent on the available computing resources. Running this example with the complete dataset takes around 10 minutes in a c5d.24xlarge (96 cores) instance from AWS.

uids = Y_df['unique_id'].unique()[:10] # Select 10 ids to make the example faster
Y_df = Y_df.query('unique_id in @uids').reset_index(drop=True)

3. Explore Data with the plot method of StatsForecast

Plot some series using the plot method from the StatsForecast class. This method plots 8 random series from the dataset and is useful for basic EDA.

Note

The StatsForecast.plot method uses Plotly as the default engine. You can change to matplotlib by setting engine="matplotlib".

from statsforecast import StatsForecast

StatsForecast.plot(Y_df, engine='matplotlib')

4. Train multiple models for many series

NeuralForecast can train many models on many time series globally and efficiently.

from ray import tune

from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoNHITS, AutoLSTM
from neuralforecast.losses.pytorch import MQLoss

Each Auto model contains a default search space that was extensively tested on multiple large-scale datasets. Additionally, users can define specific search spaces tailored for particular datasets and tasks.

First, we create a custom search space for the AutoNHITS and AutoLSTM models. Search spaces are specified with dictionaries, where keys correspond to the model's hyperparameters and values are Tune functions specifying how each hyperparameter will be sampled. For example, use randint to sample integers uniformly, and choice to sample values from a list.

config_nhits = {
    "input_size": tune.choice([48, 48*2, 48*3]),              # Length of input window
    "start_padding_enabled": True,
    "n_blocks": 5*[1],                                              # Length of input window
    "mlp_units": 5 * [[64, 64]],                                  # Length of input window
    "n_pool_kernel_size": tune.choice([5*[1], 5*[2], 5*[4],         
                                      [8, 4, 2, 1, 1]]),            # MaxPooling Kernel size
    "n_freq_downsample": tune.choice([[8, 4, 2, 1, 1],
                                      [1, 1, 1, 1, 1]]),            # Interpolation expressivity ratios
    "learning_rate": tune.loguniform(1e-4, 1e-2),                   # Initial Learning rate
    "scaler_type": tune.choice([None]),                             # Scaler type
    "max_steps": tune.choice([1000]),                               # Max number of training iterations
    "batch_size": tune.choice([1, 4, 10]),                          # Number of series in batch
    "windows_batch_size": tune.choice([128, 256, 512]),             # Number of windows in batch
    "random_seed": tune.randint(1, 20),                             # Random seed
}

config_lstm = {
    "input_size": tune.choice([48, 48*2, 48*3]),              # Length of input window
    "encoder_hidden_size": tune.choice([64, 128]),            # Hidden size of LSTM cells
    "encoder_n_layers": tune.choice([2,4]),                   # Number of layers in LSTM
    "learning_rate": tune.loguniform(1e-4, 1e-2),             # Initial Learning rate
    "scaler_type": tune.choice(['robust']),                   # Scaler type
    "max_steps": tune.choice([500, 1000]),                    # Max number of training iterations
    "batch_size": tune.choice([1, 4]),                        # Number of series in batch
    "random_seed": tune.randint(1, 20),                       # Random seed
}

To instantiate an Auto model you need to define:

  • h: forecasting horizon.
  • loss: training and validation loss from neuralforecast.losses.pytorch.
  • config: hyperparameter search space. If None, the Auto class will use a pre-defined suggested hyperparameter space.
  • search_alg: search algorithm (from tune.search); the default is random search. Refer to https://docs.ray.io/en/latest/tune/api_docs/suggestion.html for more information on the different search algorithm options. A sketch of passing a non-default algorithm appears after the instantiation code below.
  • num_samples: number of configurations explored.

In this example we set the horizon h to 48, use the MQLoss distribution loss for training and validation, and use the default search algorithm.

nf = NeuralForecast(
    models=[
        AutoNHITS(h=48, config=config_nhits, loss=MQLoss(), num_samples=5),
        AutoLSTM(h=48, config=config_lstm, loss=MQLoss(), num_samples=2),
    ],
    freq='H'
)
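
If you prefer a non-default search algorithm, Auto models also accept a search_alg argument. The following is a hedged sketch, assuming Ray Tune's HyperOptSearch and the optional hyperopt package are available in your environment:

from ray.tune.search.hyperopt import HyperOptSearch

AutoNHITS(
    h=48,
    config=config_nhits,
    loss=MQLoss(),
    search_alg=HyperOptSearch(),  # replaces the default random search
    num_samples=5,
)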

Tip

The number of samples, num_samples, is a crucial parameter! Larger values will usually produce better results as we explore more configurations in the search space, but it will increase training times. Larger search spaces will usually require more samples. As a general rule, we recommend setting num_samples higher than 20.

Next, we use the NeuralForecast class to train the Auto models. In this step, the Auto models automatically perform hyperparameter tuning: they train multiple models with different hyperparameters, produce forecasts on the validation set, and evaluate them. The best configuration is selected based on the error on the validation set. Only the best model is stored and used during inference.

nf.fit(df=Y_df)
Global seed set to 15
Global seed set to 4

Next, we use the predict method to forecast the next 48 hours using the optimal hyperparameters. Because we trained with MQLoss, the point forecast columns carry a '-median' suffix, which we strip for convenience.

fcst_df = nf.predict()
fcst_df.columns = fcst_df.columns.str.replace('-median', '')
fcst_df.head()
            ds   AutoNHITS  AutoNHITS-lo-90  AutoNHITS-lo-80  AutoNHITS-hi-80  AutoNHITS-hi-90    AutoLSTM  AutoLSTM-lo-90  AutoLSTM-lo-80  AutoLSTM-hi-80  AutoLSTM-hi-90
unique_id
H1         749  550.545288       491.368347       484.838226       640.832520       658.631592  581.597534      510.460632      533.967041      660.153076      690.976379
H1         750  549.216736       491.054932       484.474243       639.552002       657.615967  530.324402      440.821899      472.254272      622.214539      653.435913
H1         751  528.075989       466.917053       463.002289       621.197205       642.255005  487.045593      383.502045      423.310974      594.273071      627.640320
H1         752  486.842255       418.012115       419.017242       585.653259       611.903809  457.408081      347.901093      390.807495      569.789062      604.200012
H1         753  452.015930       371.543884       379.539215       558.845154       590.465942  441.641418      333.888611      374.730621      557.401978      595.008484
StatsForecast.plot(Y_df, fcst_df, engine='matplotlib', max_insample_length=48 * 3, level=[80, 90])

The StatsForecast.plot method allows for further customization. For example, plot the results for different models and unique ids.

# Plot two unique_ids and one selected model
StatsForecast.plot(Y_df, fcst_df, models=["AutoLSTM"], unique_ids=["H107", "H104"], level=[80, 90], engine='matplotlib')

# Explore other models 
StatsForecast.plot(Y_df, fcst_df, models=["AutoNHITS"], unique_ids=["H10", "H105"], level=[80, 90], engine='matplotlib')

5. Evaluate the model’s performance

In previous steps, we used our historical data to predict the future. However, to assess its accuracy, we would also like to know how the model would have performed in the past. To assess the accuracy and robustness of your models on your data, perform cross-validation.

With time series data, Cross Validation is done by defining a sliding window across the historical data and predicting the period following it. This form of cross-validation allows us to arrive at a better estimation of our model’s predictive abilities across a wider range of temporal instances while also keeping the data in the training set contiguous as is required by our models.

The following graph depicts such a Cross Validation Strategy:

Tip

Setting n_windows=1 mirrors a traditional train-test split with our historical data serving as the training set and the last 48 hours serving as the testing set.

The cross_validation method from the NeuralForecast class takes the following arguments.

  • df: training data frame

  • step_size (int): step size between each window, i.e., how often you want to run the forecasting process (see the sketch after this list).

  • n_windows (int): number of windows used for cross-validation, i.e., how many forecasting processes in the past you want to evaluate.
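
For instance, a hedged sketch of a call combining both arguments, with two non-overlapping windows spaced a full horizon (48 hours) apart, could look like this (the call actually used below keeps the default step_size):

cv_non_overlapping_df = nf.cross_validation(df=Y_df, n_windows=2, step_size=48)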

from neuralforecast.auto import AutoNHITS, AutoLSTM
config_nhits = {
    "input_size": tune.choice([48, 48*2, 48*3]),              # Length of input window
    "start_padding_enabled": True,
    "n_blocks": 5*[1],                                              # Length of input window
    "mlp_units": 5 * [[64, 64]],                                  # Length of input window
    "n_pool_kernel_size": tune.choice([5*[1], 5*[2], 5*[4],         
                                      [8, 4, 2, 1, 1]]),            # MaxPooling Kernel size
    "n_freq_downsample": tune.choice([[8, 4, 2, 1, 1],
                                      [1, 1, 1, 1, 1]]),            # Interpolation expressivity ratios
    "learning_rate": tune.loguniform(1e-4, 1e-2),                   # Initial Learning rate
    "scaler_type": tune.choice([None]),                             # Scaler type
    "max_steps": tune.choice([1000]),                               # Max number of training iterations
    "batch_size": tune.choice([1, 4, 10]),                          # Number of series in batch
    "windows_batch_size": tune.choice([128, 256, 512]),             # Number of windows in batch
    "random_seed": tune.randint(1, 20),                             # Random seed
}

config_lstm = {
    "input_size": tune.choice([48, 48*2, 48*3]),              # Length of input window
    "encoder_hidden_size": tune.choice([64, 128]),            # Hidden size of LSTM cells
    "encoder_n_layers": tune.choice([2,4]),                   # Number of layers in LSTM
    "learning_rate": tune.loguniform(1e-4, 1e-2),             # Initial Learning rate
    "scaler_type": tune.choice(['robust']),                   # Scaler type
    "max_steps": tune.choice([500, 1000]),                    # Max number of training iterations
    "batch_size": tune.choice([1, 4]),                        # Number of series in batch
    "random_seed": tune.randint(1, 20),                       # Random seed
}
nf = NeuralForecast(
    models=[
        AutoNHITS(h=48, config=config_nhits, loss=MQLoss(), num_samples=5),
        AutoLSTM(h=48, config=config_lstm, loss=MQLoss(), num_samples=2), 
    ],
    freq='H'
)
cv_df = nf.cross_validation(Y_df, n_windows=2)
Global seed set to 4
Global seed set to 19

The cv_df object is a new data frame that includes the following columns:

  • unique_id: identifies each time series
  • ds: datestamp or temporal index
  • cutoff: the last datestamp or temporal index used for training in each window. If n_windows=1, there is one unique cutoff value; if n_windows=2, there are two unique cutoff values.
  • y: true value
  • "model": columns with the model’s name and fitted value.
cv_df.columns = cv_df.columns.str.replace('-median', '')
cv_df.head()
  unique_id   ds  cutoff   AutoNHITS  AutoNHITS-lo-90  AutoNHITS-lo-80  AutoNHITS-hi-80  AutoNHITS-hi-90    AutoLSTM  AutoLSTM-lo-90  AutoLSTM-lo-80  AutoLSTM-hi-80  AutoLSTM-hi-90      y
0        H1  700     699  646.881714       601.402893       626.471008       672.432617       683.847778  633.707031      365.139832      407.289246      871.474976      925.476196  684.0
1        H1  701     699  635.608643       595.042908       612.889771       669.565979       679.472900  632.455017      365.303131      406.472992      869.484985      922.926514  619.0
2        H1  702     699  592.663940       564.124390       566.502319       648.286072       647.859253  633.002502      365.147522      407.174866      868.677979      925.269409  565.0
3        H1  703     699  543.364563       516.760742       517.990234       603.099182       601.462280  633.903503      364.976746      408.498779      869.797180      925.993164  532.0
4        H1  704     699  498.051178       461.069489       474.206360       540.752563       555.169739  634.015991      363.384155      408.305298      870.154297      920.329224  495.0
for cutoff in cv_df['cutoff'].unique():
    StatsForecast.plot(
        Y_df, 
        cv_df.query('cutoff == @cutoff').drop(columns=['y', 'cutoff']), 
        max_insample_length=48 * 4, 
        unique_ids=['H185'],
        engine='matplotlib'
    )

Now, let’s evaluate the models’ performance.

from datasetsforecast.losses import mse, mae, rmse
from datasetsforecast.evaluation import accuracy

Warning

You can also use the Mean Absolute Percentage Error (MAPE); however, for granular forecasts MAPE values are extremely hard to judge and not useful for assessing forecasting quality.

Create the data frame with the results of the evaluation of your cross-validation data frame using the MSE, MAE, and RMSE metrics.

evaluation_df = accuracy(cv_df, [mse, mae, rmse], agg_by=['unique_id'])
evaluation_df['best_model'] = evaluation_df.drop(columns=['metric', 'unique_id']).idxmin(axis=1)
evaluation_df.head()
  metric unique_id   AutoNHITS    AutoLSTM best_model
0    mae        H1   38.259457  131.158150  AutoNHITS
1    mae       H10   14.044900   32.972164  AutoNHITS
2    mae      H100  254.464978  281.836064  AutoNHITS
3    mae      H101  257.810841  148.341771   AutoLSTM
4    mae      H102  176.114826  472.413350  AutoNHITS

Create a summary table with a model column and the number of series where that model performs best.

summary_df = evaluation_df.groupby(['metric', 'best_model']).size().sort_values().to_frame()

summary_df = summary_df.reset_index()
summary_df.columns = ['metric', 'model', 'nr. of unique_ids']
summary_df
  metric      model  nr. of unique_ids
0    mae   AutoLSTM                  1
1    mse   AutoLSTM                  1
2   rmse   AutoLSTM                  1
3    mae  AutoNHITS                  9
4    mse  AutoNHITS                  9
5   rmse  AutoNHITS                  9
summary_df.query('metric == "mse"')
  metric      model  nr. of unique_ids
1    mse   AutoLSTM                  1
4    mse  AutoNHITS                  9

You can further explore your results by plotting the unique_ids where a specific model wins.

nhits_ids = evaluation_df.query('best_model == "AutoNHITS" and metric == "mse"')['unique_id'].unique()

StatsForecast.plot(Y_df, fcst_df, unique_ids=nhits_ids, engine='matplotlib')

6. Select the best model for every unique series

Define a utility function that takes your forecast’s data frame with the predictions and the evaluation data frame and returns a data frame with the best possible forecast for every unique_id.

def get_best_model_forecast(forecasts_df, evaluation_df, metric):
    df = forecasts_df.set_index('ds', append=True).stack().to_frame().reset_index(level=2) # Wide to long
    df.columns = ['model', 'best_model_forecast']
    df = df.join(evaluation_df.query('metric == @metric').set_index('unique_id')[['best_model']]) # Attach best model per unique_id
    df = df.query('model.str.replace("-lo-90|-hi-90", "", regex=True) == best_model').copy() # Keep only the winning model's columns
    df.loc[:, 'model'] = [model.replace(bm, 'best_model') for model, bm in zip(df['model'], df['best_model'])] # Rename to generic "best_model"
    df = df.drop(columns='best_model').set_index('model', append=True).unstack() # Long back to wide
    df.columns = df.columns.droplevel()
    df = df.reset_index(level=1)
    return df

Create your production-ready data frame with the best forecast for every unique_id.

prod_forecasts_df = get_best_model_forecast(fcst_df, evaluation_df, metric='mse')

prod_forecasts_df.head()
model       ds  best_model  best_model-hi-90  best_model-lo-90
unique_id
H1         749  550.545288        658.631592        491.368347
H1         750  549.216736        657.615967        491.054932
H1         751  528.075989        642.255005        466.917053
H1         752  486.842255        611.903809        418.012115
H1         753  452.015930        590.465942        371.543884

Plot the results.

StatsForecast.plot(Y_df, prod_forecasts_df, level=[90], engine='matplotlib')