NeuralForecast contains a collection of NumPy loss functions intended for use during model evaluation.
**Mean Absolute Error (MAE)**

Calculates the Mean Absolute Error between `y` and `y_hat`. MAE measures the relative prediction accuracy of a forecasting method by calculating the deviation between the prediction and the true value at a given time and averaging these deviations over the length of the series.

Parameters:
- `y`: numpy array, Actual values.
- `y_hat`: numpy array, Predicted values.
- `mask`: numpy array, Specifies the date stamps per series to consider in the loss.

Returns:
- `mae`: numpy array, (single value).
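As a rough illustration of the formula above, here is a minimal NumPy sketch; the helper name `_mae` and the masked-average convention (average only over time stamps where `mask` is 1) are assumptions for illustration, not the library's implementation.

```python
import numpy as np

def _mae(y, y_hat, mask=None):
    # Absolute deviations, averaged over the (optionally masked) time stamps.
    if mask is None:
        mask = np.ones_like(y)
    return np.sum(mask * np.abs(y - y_hat)) / np.sum(mask)

y = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.5, 2.0, 2.5, 5.0])
print(_mae(y, y_hat))  # 0.5
```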
**Mean Squared Error (MSE)**

Calculates the Mean Squared Error between `y` and `y_hat`. MSE measures the relative prediction accuracy of a forecasting method by calculating the squared deviation between the prediction and the true value at a given time and averaging these deviations over the length of the series.

Parameters:
- `y`: numpy array, Actual values.
- `y_hat`: numpy array, Predicted values.
- `mask`: numpy array, Specifies the date stamps per series to consider in the loss.

Returns:
- `mse`: numpy array, (single value).
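The same masked-average idea carries over to MSE; this sketch (with the assumed helper name `_mse`) is for illustration only.

```python
import numpy as np

def _mse(y, y_hat, mask=None):
    # Squared deviations, averaged over the (optionally masked) time stamps.
    if mask is None:
        mask = np.ones_like(y)
    return np.sum(mask * (y - y_hat) ** 2) / np.sum(mask)

y = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.5, 2.0, 2.5, 5.0])
print(_mse(y, y_hat))  # 0.375
```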
**Root Mean Squared Error (RMSE)**

Calculates the Root Mean Squared Error between `y` and `y_hat`. RMSE measures the relative prediction accuracy of a forecasting method by calculating the squared deviation between the prediction and the observed value at a given time and averaging these deviations over the length of the series. Finally, the RMSE is on the same scale as the original time series, so comparison with other series is possible only if they share a common scale. RMSE has a direct connection to the L2 norm.

Parameters:
- `y`: numpy array, Actual values.
- `y_hat`: numpy array, Predicted values.
- `mask`: numpy array, Specifies the date stamps per series to consider in the loss.

Returns:
- `rmse`: numpy array, (single value).
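A sketch of RMSE as the square root of the masked MSE, under the same illustrative assumptions as above.

```python
import numpy as np

def _rmse(y, y_hat, mask=None):
    # Square root of the masked MSE, so the result is on the scale of the series.
    if mask is None:
        mask = np.ones_like(y)
    return np.sqrt(np.sum(mask * (y - y_hat) ** 2) / np.sum(mask))

y = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.5, 2.0, 2.5, 5.0])
print(_rmse(y, y_hat))  # ~0.612
```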
**Mean Absolute Percentage Error (MAPE)**

Calculates the Mean Absolute Percentage Error between `y` and `y_hat`. MAPE measures the relative prediction accuracy of a forecasting method by calculating the percentage deviation between the prediction and the observed value at a given time and averaging these deviations over the length of the series. The closer an observed value is to zero, the higher the penalty MAPE assigns to the corresponding error.

Parameters:
- `y`: numpy array, Actual values.
- `y_hat`: numpy array, Predicted values.
- `mask`: numpy array, Specifies the date stamps per series to consider in the loss.

Returns:
- `mape`: numpy array, (single value).
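An illustrative sketch of the percentage-error average; `_mape` is a hypothetical helper, and note that it divides by `|y|`, so zeros in `y` would need special handling (which is exactly the weakness SMAPE below addresses).

```python
import numpy as np

def _mape(y, y_hat, mask=None):
    # Absolute errors scaled by the absolute actuals, then averaged.
    if mask is None:
        mask = np.ones_like(y)
    pct_errors = np.abs(y - y_hat) / np.abs(y)
    return np.sum(mask * pct_errors) / np.sum(mask)

y = np.array([10.0, 20.0, 40.0])
y_hat = np.array([12.0, 19.0, 40.0])
print(_mape(y, y_hat))  # ~0.0833, i.e. 8.33%
```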
**Symmetric Mean Absolute Percentage Error (SMAPE)**

Calculates the Symmetric Mean Absolute Percentage Error between `y` and `y_hat`. SMAPE measures the relative prediction accuracy of a forecasting method by calculating the relative deviation between the prediction and the observed value, scaled by the sum of the absolute values of the prediction and the observed value at a given time, and then averaging these deviations over the length of the series. This bounds SMAPE between 0% and 200%, which is desirable compared to plain MAPE, which may be undefined when the target is zero.

Parameters:
- `y`: numpy array, Actual values.
- `y_hat`: numpy array, Predicted values.
- `mask`: numpy array, Specifies the date stamps per series to consider in the loss.

Returns:
- `smape`: numpy array, (single value).
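A sketch showing the symmetric scaling; the handling of a zero denominator here (returning 0 when both actual and prediction are 0) is an assumption for illustration.

```python
import numpy as np

def _smape(y, y_hat, mask=None):
    # Symmetric scaling by |y| + |y_hat| keeps each term in [0, 2], i.e. 0%-200%.
    if mask is None:
        mask = np.ones_like(y)
    scale = np.abs(y) + np.abs(y_hat)
    safe_scale = np.where(scale == 0, 1.0, scale)  # avoid 0/0; numerator is 0 there anyway
    ratios = 2.0 * np.abs(y - y_hat) / safe_scale
    return np.sum(mask * ratios) / np.sum(mask)

y = np.array([10.0, 20.0, 0.0])
y_hat = np.array([12.0, 19.0, 0.0])
print(_smape(y, y_hat))  # ~0.0777
```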
**Mean Absolute Scaled Error (MASE)**

Calculates the Mean Absolute Scaled Error between `y` and `y_hat`. MASE measures the relative prediction accuracy of a forecasting method by comparing the mean absolute error of the prediction against the mean absolute error of the seasonal naive model. The MASE is one of the components of the Overall Weighted Average (OWA) used in the M4 Competition.

Parameters:
- `y`: numpy array, (batch_size, output_size), Actual values.
- `y_hat`: numpy array, (batch_size, output_size), Predicted values.
- `y_insample`: numpy array, (batch_size, input_size), Actual in-sample values for the Seasonal Naive predictions.
- `seasonality`: int, Main frequency of the time series; Hourly 24, Daily 7, Weekly 52, Monthly 12, Quarterly 4, Yearly 1.
- `mask`: numpy array, Specifies the date stamps per series to consider in the loss.

Returns:
- `mase`: numpy array, (single value).
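A single-series (1D) sketch of the scaling against the in-sample seasonal naive errors; the library works with batched arrays as listed above, so treat this only as a conceptual illustration under assumed shapes.

```python
import numpy as np

def _mase(y, y_hat, y_insample, seasonality, mask=None):
    # Out-of-sample MAE scaled by the in-sample MAE of the seasonal naive forecast.
    if mask is None:
        mask = np.ones_like(y)
    mae_forecast = np.sum(mask * np.abs(y - y_hat)) / np.sum(mask)
    naive_errors = np.abs(y_insample[seasonality:] - y_insample[:-seasonality])
    return mae_forecast / np.mean(naive_errors)

y_insample = np.array([10.0, 12.0, 11.0, 13.0, 12.0, 14.0])
y = np.array([13.0, 15.0])
y_hat = np.array([12.5, 14.0])
print(_mase(y, y_hat, y_insample, seasonality=1))  # ~0.469
```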
**Relative Mean Absolute Error (RMAE)**

Calculates the Relative Mean Absolute Error (RMAE) between two sets of forecasts (from two different forecasting methods). A value smaller than one implies that the forecast in the numerator is better than the forecast in the denominator.

Parameters:
- `y`: numpy array, Observed values.
- `y_hat1`: numpy array, Predicted values of the first model.
- `y_hat2`: numpy array, Predicted values of the baseline model.
- `weights`: numpy array, optional, Weights for the weighted average.
- `axis`: None or int, optional, Axis or axes along which to average.

Returns:
- `rmae`: numpy array or double.
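A sketch of the ratio of the two MAEs; passing `weights` and `axis` through `np.average` is an assumption for illustration, not necessarily how the library handles them.

```python
import numpy as np

def _rmae(y, y_hat1, y_hat2, weights=None, axis=None):
    # Ratio of the MAE of the candidate model to the MAE of the baseline model.
    num = np.average(np.abs(y - y_hat1), weights=weights, axis=axis)
    den = np.average(np.abs(y - y_hat2), weights=weights, axis=axis)
    return num / den

y = np.array([10.0, 20.0, 30.0])
model = np.array([11.0, 19.0, 31.0])     # candidate forecasts
baseline = np.array([12.0, 22.0, 28.0])  # baseline forecasts
print(_rmae(y, model, baseline))  # 0.5 -> candidate halves the baseline's MAE
```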
**Quantile Loss (QL)**

Computes the quantile loss between `y` and `y_hat`. QL measures the deviation of a quantile forecast. By weighting the absolute deviation asymmetrically, the loss pays more attention to under- or over-estimation. A common value for `q` is 0.5, which corresponds to the deviation from the median (pinball loss).

Parameters:
- `y`: numpy array, Actual values.
- `y_hat`: numpy array, Predicted values.
- `q`: float, between 0 and 1. The slope of the quantile loss; in the context of quantile regression, `q` determines the conditional quantile level.
- `mask`: numpy array, Specifies the date stamps per series to consider in the loss.

Returns:
- `quantile_loss`: numpy array, (single value).
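A sketch of the pinball loss with the asymmetric weighting described above; `_quantile_loss` is a hypothetical helper, not the library function.

```python
import numpy as np

def _quantile_loss(y, y_hat, q, mask=None):
    # Pinball loss: under-predictions are weighted by q, over-predictions by (1 - q).
    if mask is None:
        mask = np.ones_like(y)
    delta = y - y_hat
    loss = np.maximum(q * delta, (q - 1) * delta)
    return np.sum(mask * loss) / np.sum(mask)

y = np.array([10.0, 20.0, 30.0])
y_hat = np.array([12.0, 18.0, 30.0])
print(_quantile_loss(y, y_hat, q=0.5))  # ~0.667, i.e. half the MAE
```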
**Multi-Quantile Loss (MQL)**

Calculates the Multi-Quantile Loss (MQL) between `y` and `y_hat`. MQL computes the average quantile loss over a given set of quantiles, based on the absolute difference between the predicted quantiles and the observed values.

In the limit, MQL makes it possible to measure the accuracy of a full predictive distribution via the Continuous Ranked Probability Score (CRPS). This can be achieved through a numerical integration technique that discretizes the quantiles and treats the CRPS integral with a left Riemann approximation, averaging over uniformly spaced quantiles.

Parameters:
- `y`: numpy array, Actual values.
- `y_hat`: numpy array, Predicted values.
- `quantiles`: numpy array, (n_quantiles), Quantiles to estimate from the distribution of `y`.
- `mask`: numpy array, Specifies the date stamps per series to consider in the loss.

Returns:
- `mqloss`: numpy array, (single value).
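A sketch that averages the pinball loss over a quantile grid; the assumed layout of `y_hat` here (one column per quantile for a single series) is illustrative and may differ from the library's batched layout.

```python
import numpy as np

def _mqloss(y, y_hat, quantiles, mask=None):
    # Average pinball loss across quantiles; with a dense, uniformly spaced
    # quantile grid this approximates the CRPS via a left Riemann sum.
    if mask is None:
        mask = np.ones_like(y)
    delta = y[:, None] - y_hat                               # (horizon, n_quantiles)
    loss = np.maximum(quantiles * delta, (quantiles - 1) * delta)
    return np.sum(mask[:, None] * loss) / (np.sum(mask) * len(quantiles))

y = np.array([10.0, 20.0])
quantiles = np.array([0.1, 0.5, 0.9])
y_hat = np.array([[8.0, 10.0, 12.0],    # predicted quantiles for t=0
                  [18.0, 20.0, 22.0]])  # predicted quantiles for t=1
print(_mqloss(y, y_hat, quantiles))  # ~0.133
```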