module neuralforecast.core
Global Variables
- MODEL_FILENAME_DICT
class NeuralForecast
method __init__
The core.NeuralForecast class allows you to efficiently fit multiple NeuralForecast models for large sets of time series. It operates with a pandas DataFrame df that identifies series and datestamps with the unique_id and ds columns. The y column denotes the target time series variable.
Args:
- models (List[typing.Any]): Instantiated neuralforecast.models, see collection here.
- freq (str or int): Frequency of the data. Must be a valid pandas or polars offset alias, or an integer.
- local_scaler_type (str, optional): Scaler to apply per series to all features before fitting, which is inverted after predicting. Can be ‘standard’, ‘robust’, ‘robust-iqr’, ‘minmax’ or ‘boxcox’. Defaults to None.
NeuralForecast: Returns instantiated NeuralForecast class.
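A minimal instantiation sketch, assuming the NHITS model from neuralforecast.models; any other model from the collection can be used the same way:

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS

# One instantiated model per entry in `models`; `freq` follows pandas offset aliases.
nf = NeuralForecast(
    models=[NHITS(h=12, input_size=24, max_steps=50)],
    freq='M',               # monthly frequency
    local_scaler_type=None, # optional per-series scaling
)
```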
method cross_validation
core.NeuralForecast’s cross-validation efficiently fits a list of NeuralForecast models through multiple windows, in either a chained or rolled manner.
Args:
- df (pandas or polars DataFrame, optional): DataFrame with columns [unique_id, ds, y] and exogenous variables. If None, a previously stored dataset is required. Defaults to None.
- static_df (pandas or polars DataFrame, optional): DataFrame with columns [unique_id] and static exogenous. Defaults to None.
- n_windows (int): Number of windows used for cross validation. Defaults to 1.
- step_size (int): Step size between each window. Defaults to 1.
- val_size (int, optional): Length of validation size. If passed, set n_windows=None. Defaults to 0.
- test_size (int, optional): Length of test size. If passed, set n_windows=None. Defaults to None.
- use_init_models (bool, optional): Use initial model passed when object was instantiated. Defaults to False.
- verbose (bool): Print processing steps. Defaults to False.
- refit (bool or int): Retrain model for each cross validation window. If False, the models are trained at the beginning and then used to predict each window. If positive int, the models are retrained every refit windows. Defaults to False.
- id_col (str): Column that identifies each series. Defaults to ‘unique_id’.
- time_col (str): Column that identifies each timestep, its values can be timestamps or integers. Defaults to ‘ds’.
- target_col (str): Column that contains the target. Defaults to ‘y’.
- prediction_intervals (PredictionIntervals, optional): Configuration to calibrate prediction intervals (Conformal Prediction). Defaults to None.
- level (list of ints or floats, optional): Confidence levels between 0 and 100. Defaults to None.
- quantiles (list of floats, optional): Alternative to level, target quantiles to predict. Defaults to None.
- h (int, optional): Forecasting horizon. If None, uses the horizon of the fitted models. Defaults to None.
- data_kwargs (kwargs): Extra arguments to be passed to the dataset within each model.
fcsts_df (pandas or polars DataFrame): DataFrame with insample models columns for point predictions and probabilistic predictions for all fitted models.
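A usage sketch, using the bundled AirPassengersDF from neuralforecast.utils purely as an example of the expected long-format df:

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS
from neuralforecast.utils import AirPassengersDF as Y_df  # columns: unique_id, ds, y

nf = NeuralForecast(models=[NHITS(h=12, input_size=24, max_steps=50)], freq='M')

# Two evaluation windows of length h=12, spaced 12 steps apart;
# refit=False trains the models once and reuses them for every window.
cv_df = nf.cross_validation(df=Y_df, n_windows=2, step_size=12, refit=False)
print(cv_df.head())  # includes ds, cutoff, y and one column per model
```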
method explain
Use stored fitted models to explain a large set of time series from DataFrame df.
Args:
- horizons (list of int, optional): List of horizons to explain. If None, all horizons are explained. Defaults to None.
- outputs (list of int, optional): List of outputs to explain for models with multiple outputs. Defaults to [0] (first output).
- explainer (str): Name of the explainer to use. Options are ‘IntegratedGradients’, ‘ShapleyValueSampling’, ‘InputXGradient’. Defaults to ‘IntegratedGradients’.
- df (pandas, polars or spark DataFrame, optional): DataFrame with columns [unique_id, ds, y] and exogenous variables. If a DataFrame is passed, it is used to generate forecasts. Defaults to None.
- static_df (pandas, polars or spark DataFrame, optional): DataFrame with columns [unique_id] and static exogenous. Defaults to None.
- futr_df (pandas, polars or spark DataFrame, optional): DataFrame with [unique_id, ds] columns and df’s future exogenous. Defaults to None.
- h (int): The forecast horizon. Can be larger than the horizon set during training.
- verbose (bool): Print processing steps. Defaults to False.
- engine (spark session): Distributed engine for inference. Only used if df is a spark dataframe or if fit was called on a spark dataframe.
- level (list of ints or floats, optional): Confidence levels between 0 and 100. Defaults to None.
- quantiles (list of floats, optional): Alternative to level, target quantiles to predict. Defaults to None.
- data_kwargs (kwargs): Extra arguments to be passed to the dataset within each model.
fcsts_df (pandas or polars DataFrame): DataFrame with insample models columns for point predictions and probabilistic predictions for all fitted models.
explanations (dict): Dictionary of explanations for the predictions.
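A hedged sketch, assuming nf has already been fitted (see fit below) and unpacking the two returns listed above; support may depend on the model type:

```python
# Explain the first output of each fitted model with IntegratedGradients,
# over all forecast horizons.
fcsts_df, explanations = nf.explain(
    horizons=None,                   # explain every horizon
    explainer='IntegratedGradients',
)
```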
method fit
Fit models to a large set of time series from DataFrame df and store fitted models for later inspection.
Args:
- df (pandas, polars or spark DataFrame, or a list of parquet files containing the series, optional): DataFrame with columns [unique_id, ds, y] and exogenous variables. If None, a previously stored dataset is required. Defaults to None.
- static_df (pandas, polars or spark DataFrame, optional): DataFrame with columns [unique_id] and static exogenous. Defaults to None.
- val_size (int, optional): Size of validation set. Defaults to 0.
- use_init_models (bool, optional): Use initial model passed when NeuralForecast object was instantiated. Defaults to False.
- verbose (bool): Print processing steps. Defaults to False.
- id_col (str): Column that identifies each series. Defaults to ‘unique_id’.
- time_col (str): Column that identifies each timestep, its values can be timestamps or integers. Defaults to ‘ds’.
- target_col (str): Column that contains the target. Defaults to ‘y’.
- distributed_config (neuralforecast.DistributedConfig): Configuration to use for DDP training. Currently only spark is supported.
- prediction_intervals (PredictionIntervals, optional): Configuration to calibrate prediction intervals (Conformal Prediction). Defaults to None.
NeuralForecast: Returns NeuralForecast class with fitted models.
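A usage sketch, assuming nf was instantiated as in the earlier example and Y_df is a long-format DataFrame with unique_id, ds and y columns:

```python
from neuralforecast.utils import AirPassengersDF as Y_df  # example long-format data

# Reserve the last 12 observations of each series for validation while fitting.
nf.fit(df=Y_df, val_size=12)
```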
method get_missing_future
Get the missing ids and dates that should be in futr_df.
Args:
- futr_df (pandas or polars DataFrame): DataFrame with [unique_id, ds] columns and df’s future exogenous.
- df (pandas or polars DataFrame, optional): DataFrame with columns [unique_id, ds, y] and exogenous variables. Only required if this is different than the one used in the fit step. Defaults to None.
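A sketch assuming nf has already been fitted; make_future_dataframe (documented below) is used here only to build a candidate futr_df:

```python
futr_df = nf.make_future_dataframe()            # expected future (unique_id, ds) pairs
missing = nf.get_missing_future(futr_df=futr_df)
print(missing)  # empty if futr_df already covers the full forecasting horizon
```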
method load
core.NeuralForecast’s method to load checkpoint from path.
Args:
- path (str): Directory with stored artifacts.
- verbose (bool): Defaults to False.
- **kwargs: Additional keyword arguments to be passed to the function load_from_checkpoint.
result (NeuralForecast): Instantiated NeuralForecast class.
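A sketch assuming a NeuralForecast object was previously stored with save; './checkpoints/' is a placeholder path:

```python
from neuralforecast import NeuralForecast

# Restore models, dataset and configuration from disk and forecast again.
nf2 = NeuralForecast.load(path='./checkpoints/')
fcst_df = nf2.predict()
```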
method make_future_dataframe
Create a dataframe with all ids and future times in the forecasting horizon.
Args:
- df (pandas or polars DataFrame, optional): DataFrame with columns [unique_id, ds, y] and exogenous variables. Only required if this is different than the one used in the fit step. Defaults to None.
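A sketch assuming nf has been fitted; the commented 'promotion' column is a hypothetical exogenous variable:

```python
futr_df = nf.make_future_dataframe()  # unique_id and ds for the next h steps per series
print(futr_df.head())
# Future exogenous columns, if the models require any, would be filled in here, e.g.
# futr_df['promotion'] = 0  # hypothetical exogenous column
```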
method predict
Use stored fitted models to predict a large set of time series from DataFrame df.
Args:
- df (pandas, polars or spark DataFrame, optional): DataFrame with columns [unique_id, ds, y] and exogenous variables. If a DataFrame is passed, it is used to generate forecasts. Defaults to None.
- static_df (pandas, polars or spark DataFrame, optional): DataFrame with columns [unique_id] and static exogenous. Defaults to None.
- futr_df (pandas, polars or spark DataFrame, optional): DataFrame with [unique_id, ds] columns and df’s future exogenous. Defaults to None.
- verbose (bool): Print processing steps. Defaults to False.
- engine (spark session): Distributed engine for inference. Only used if df is a spark dataframe or if fit was called on a spark dataframe.
- level (list of ints or floats, optional): Confidence levels between 0 and 100. Defaults to None.
- quantiles (list of floats, optional): Alternative to level, target quantiles to predict. Defaults to None.
- h (int, optional): Forecasting horizon. If None, uses the horizon of the fitted models. Defaults to None.
- data_kwargs (kwargs): Extra arguments to be passed to the dataset within each model.
fcsts_df (pandas or polars DataFrame): DataFrame with insample models columns for point predictions and probabilistic predictions for all fitted models.
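A sketch assuming nf has been fitted as above:

```python
# Forecast h steps ahead with the fitted models; pass level=[80, 95] only if
# prediction intervals were configured (or the losses are probabilistic).
fcst_df = nf.predict()
print(fcst_df.head())  # ds plus one column per fitted model
```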
method predict_insample
core.NeuralForecast’s predict_insample uses stored fitted models to predict historic values of a time series from the stored dataframe.
Args:
- step_size (int): Step size between each window. Defaults to 1.
- level (list of ints or floats, optional): Confidence levels between 0 and 100. Defaults to None.
- quantiles (list of floats, optional): Alternative to level, target quantiles to predict. Defaults to None.
fcsts_df (pandas.DataFrame): DataFrame with insample predictions for all fitted models.
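A sketch assuming nf has been fitted and its training dataset is still stored:

```python
# In-sample predictions over the training data, advancing the forecast
# window one step at a time.
insample_df = nf.predict_insample(step_size=1)
print(insample_df.head())  # ds, cutoff, y and one column per model
```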
method save
core.NeuralForecast’s method to save the current status of models, dataset, and configuration. Note that by default the models do not save training checkpoints, to conserve disk space; to keep them, change the individual model **trainer_kwargs to include enable_checkpointing=True.
Args:
- path (str): Directory to save current status.
- model_index (list, optional): List to specify which models from list of self.models to save. Defaults to None.
- save_dataset (bool): Whether to save dataset or not. Defaults to True.
- overwrite (bool): Whether to overwrite files or not. Defaults to False.
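A sketch assuming nf has been fitted; './checkpoints/' is a placeholder directory:

```python
# Persist the fitted models, the stored dataset and the configuration so the
# object can later be restored with NeuralForecast.load.
nf.save(path='./checkpoints/', save_dataset=True, overwrite=True)
```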

