Uncertainty quantification with Conformal Prediction
Tutorial on how to train neuralforecast models and obtain prediction intervals using conformal prediction methods.
Conformal prediction uses cross-validation on a model trained with a point loss function to generate prediction intervals. No additional training is needed, and the model is treated as a black box. The approach is compatible with any model.
In this notebook, we demonstrate how to obtain prediction intervals using conformal prediction.
Load libraries
Data
We use the AirPassengers dataset for the demonstration of conformal prediction.
Model training
We now train an NHITS model on the above dataset. To support conformal predictions, we must first instantiate the PredictionIntervals class and pass it to the fit method. By default, the PredictionIntervals class employs n_windows=2 for the cross-validation used to compute the conformity scores. We also train a second NHITS model using DistributionLoss to demonstrate the difference between conformal prediction and quantile outputs.
By default, the PredictionIntervals class employs method=conformal_distribution for the conformal predictions, but it also supports method=conformal_error. The conformal_distribution method constructs forecast paths from the absolute errors and computes quantiles from those paths, while the conformal_error method computes quantiles directly from the errors.
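The two methods can be illustrated with a small numpy sketch. The numbers below are toy values, and this is a simplified illustration of the idea rather than the library's exact implementation:

```python
import numpy as np

# Absolute errors collected from cross-validation windows (toy values):
# actual values minus the model's point forecasts.
abs_errors = np.abs(
    np.array([112., 118., 132., 129., 121., 135.])    # actuals
    - np.array([110., 120., 128., 131., 119., 130.])  # point forecasts
)

point_forecast = 140.0
level = 90  # desired coverage in percent

# conformal_error: take a quantile of the absolute errors directly,
# then widen the point forecast symmetrically by that amount.
q = np.quantile(abs_errors, level / 100)
lo_err, hi_err = point_forecast - q, point_forecast + q

# conformal_distribution: build forecast "paths" by shifting the point
# forecast by +/- each absolute error, then take quantiles of the paths.
paths = np.concatenate([point_forecast - abs_errors,
                        point_forecast + abs_errors])
alpha = 1 - level / 100
lo_dist, hi_dist = np.quantile(paths, [alpha / 2, 1 - alpha / 2])

print(lo_err, hi_err)
print(lo_dist, hi_dist)
```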
We consider two models below:
- A model trained using a point loss function (MAE), where we quantify the uncertainty using conformal prediction. This case is labeled with NHITS.
- A model trained using a DistributionLoss('Normal'), where we quantify the uncertainty by training the model to fit the parameters of a Normal distribution. This case is labeled with NHITS1.
Forecasting
To generate conformal intervals, we specify the desired levels in the predict method.