This interface is only tested on Linux

DistributedMLForecast

```python
DistributedMLForecast(
    models,
    freq,
    lags=None,
    lag_transforms=None,
    date_features=None,
    num_threads=1,
    target_transforms=None,
    engine=None,
    num_partitions=None,
    lag_transforms_namer=None,
)
```
Multi-backend distributed pipeline. Create a distributed forecast object.

Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| models | regressor or list of regressors | Models that will be trained and used to compute the forecasts. | required |
| freq | str or int | Pandas offset alias, e.g. 'D', 'W-THU', or an integer denoting the frequency of the series. | required |
| lags | list of int | Lags of the target to use as features. | None |
| lag_transforms | dict of int to list of functions | Mapping of target lags to their transformations. | None |
| date_features | list of str or callable | Features computed from the dates. Can be pandas date attributes or functions that take the dates as input. | None |
| num_threads | int | Number of threads to use when computing the features. | 1 |
| target_transforms | list of transformers | Transformations that will be applied to the target before computing the features and restored after the forecasting step. | None |
| engine | fugue execution engine | Dask Client, Spark Session, etc. to use for the distributed computation. If None, it will be inferred from the input type. | None |
| num_partitions | int | Number of data partitions to use. If None, the default partitions provided by the AnyDataFrame used by the fit and cross_validation methods will be used. If a Ray Dataset is provided and num_partitions is None, the partitioning will be done by the id_col. | None |
| lag_transforms_namer | callable | Function that takes a transformation (either a function or a class), a lag, and extra arguments, and produces a name. | None |
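A minimal construction sketch, assuming the Dask backend and the `DaskLGBMForecast` wrapper from mlforecast's distributed models; `RollingMean` comes from `mlforecast.lag_transforms`:

```python
from mlforecast.distributed import DistributedMLForecast
from mlforecast.distributed.models.dask.lgb import DaskLGBMForecast
from mlforecast.lag_transforms import RollingMean

fcst = DistributedMLForecast(
    models=[DaskLGBMForecast()],                        # distributed LightGBM wrapper
    freq='D',                                           # daily frequency
    lags=[7, 14],                                       # raw target lags used as features
    lag_transforms={7: [RollingMean(window_size=28)]},  # 28-day rolling mean of lag 7
    date_features=['dayofweek'],                        # calendar feature from the timestamps
)
```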

DistributedMLForecast.fit

```python
fit(
    df,
    id_col="unique_id",
    time_col="ds",
    target_col="y",
    static_features=None,
    dropna=True,
    keep_last_n=None,
)
```
Apply the feature engineering and train the models.

Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| df | dask, spark or ray DataFrame | Series data in long format. | required |
| id_col | str | Column that identifies each series. | 'unique_id' |
| time_col | str | Column that identifies each timestep; its values can be timestamps or integers. | 'ds' |
| target_col | str | Column that contains the target. | 'y' |
| static_features | list of str | Names of the features that are static and will be repeated when forecasting. | None |
| dropna | bool | Drop rows with missing values produced by the transformations. | True |
| keep_last_n | int | Keep only this many records from each series for the forecasting step. Can save time and memory if your features allow it. | None |
Returns:

| Type | Description |
|---|---|
| DistributedMLForecast | Forecast object with series values and trained models. |
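A fitting sketch continuing the constructor example above; the toy `series` frame is hypothetical, and a running `dask.distributed` Client is assumed since the distributed LightGBM wrapper trains through it:

```python
import dask.dataframe as dd
import pandas as pd
from dask.distributed import Client

client = Client()  # local cluster for the sketch

# Hypothetical long-format data: one row per (series id, timestamp) pair.
series = pd.DataFrame({
    'unique_id': ['s1'] * 100 + ['s2'] * 100,
    'ds': list(pd.date_range('2023-01-01', periods=100, freq='D')) * 2,
    'y': [float(i) for i in range(200)],
})
ddf = dd.from_pandas(series, npartitions=2)

# Engineers the features and trains every model passed to the constructor.
fcst.fit(ddf, id_col='unique_id', time_col='ds', target_col='y')
```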

DistributedMLForecast.predict

```python
predict(
    h,
    before_predict_callback=None,
    after_predict_callback=None,
    X_df=None,
    new_df=None,
    ids=None,
)
```
Compute the predictions for the next h steps.

Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| h | int | Forecast horizon. | required |
| before_predict_callback | callable | Function to call on the features before computing the predictions. It takes the input dataframe that will be passed to the model for predicting and should return a dataframe with the same structure. The series identifier is on the index. | None |
| after_predict_callback | callable | Function to call on the predictions before updating the targets. It takes a pandas Series with the predictions and should return another one with the same structure. The series identifier is on the index. | None |
| X_df | pandas DataFrame | Dataframe with the future exogenous features. Should have the id column and the time column. | None |
| new_df | dask or spark DataFrame | Series data of new observations for which forecasts are to be generated. Must have the same structure as the dataframe used to fit the model, including any features and time series data. If not None, the method generates forecasts for these new observations. | None |
| ids | list of str | Subset of ids seen during training for which the forecasts should be computed. | None |
Returns:

| Type | Description |
|---|---|
| dask, spark or ray DataFrame | Predictions for each series and timestep, with one column per model. |
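A prediction sketch continuing the Dask example; `compute()` is Dask-specific (on Spark you would call `toPandas()` instead):

```python
preds = fcst.predict(h=14)  # one row per series/timestep, one column per model
preds.compute()             # materialize the lazy Dask result as pandas
```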

DistributedMLForecast.save

```python
save(path)
```
Save the forecast object.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str | Directory where artifacts will be stored. | required |

DistributedMLForecast.load

```python
load(path, engine)
```
Load a saved forecast object.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str | Directory with saved artifacts. | required |
| engine | fugue execution engine | Dask Client, Spark Session, etc. to use for the distributed computation. | required |
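A persistence sketch covering both methods; the directory name is hypothetical and `client` is the Dask Client from the fit example:

```python
fcst.save('distributed_fcst')  # write artifacts to the directory

# Restore on the same cluster by passing the execution engine explicitly.
fcst2 = DistributedMLForecast.load('distributed_fcst', engine=client)
```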

DistributedMLForecast.update

```python
update(df)
```
Update the values of the stored series.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| df | pandas DataFrame | Dataframe with new observations. | required |
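A sketch of appending fresh observations to the stored series; `new_obs` is hypothetical but must match the training schema:

```python
new_obs = pd.DataFrame({
    'unique_id': ['s1', 's2'],
    'ds': [pd.Timestamp('2023-04-11')] * 2,
    'y': [100.0, 200.0],
})
fcst.update(new_obs)  # subsequent predictions start after these new timestamps
```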

DistributedMLForecast.to_local

```python
to_local()
```
Convert this distributed forecast object into a local one. This pulls all the data from the remote machines, so make sure it fits in the scheduler/driver; if you're not sure, use the save method instead.

Returns:

| Type | Description |
|---|---|
| MLForecast | Local forecast object. |
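A sketch of pulling everything to the driver; safe here only because the toy data is tiny:

```python
local_fcst = fcst.to_local()  # regular MLForecast object in driver memory
local_fcst.predict(h=14)      # returns a pandas DataFrame directly
```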

DistributedMLForecast.preprocess

```python
preprocess(
    df,
    id_col="unique_id",
    time_col="ds",
    target_col="y",
    static_features=None,
    dropna=True,
    keep_last_n=None,
)
```
Add the features to the data.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| df | dask, spark or ray DataFrame | Series data in long format. | required |
| id_col | str | Column that identifies each series. | 'unique_id' |
| time_col | str | Column that identifies each timestep; its values can be timestamps or integers. | 'ds' |
| target_col | str | Column that contains the target. | 'y' |
| static_features | list of str | Names of the features that are static and will be repeated when forecasting. | None |
| dropna | bool | Drop rows with missing values produced by the transformations. | True |
| keep_last_n | int | Keep only this many records from each series for the forecasting step. Can save time and memory if your features allow it. | None |
Returns:

| Type | Description |
|---|---|
| same type as df | df with added features. |
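A feature-inspection sketch; the output stays a Dask DataFrame here because the input is one:

```python
features = fcst.preprocess(ddf)  # adds the lag, lag-transform and date columns
features.head()                  # peek at the engineered features
```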

DistributedMLForecast.cross_validation

```python
cross_validation(
    df,
    n_windows,
    h,
    id_col="unique_id",
    time_col="ds",
    target_col="y",
    step_size=None,
    static_features=None,
    dropna=True,
    keep_last_n=None,
    refit=True,
    before_predict_callback=None,
    after_predict_callback=None,
    input_size=None,
)
```
Perform time series cross validation. Creates n_windows splits, where each window has h test periods, trains the models, computes the predictions and merges the actuals.

Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| df | dask, spark or ray DataFrame | Series data in long format. | required |
| n_windows | int | Number of windows to evaluate. | required |
| h | int | Number of test periods in each window. | required |
| id_col | str | Column that identifies each series. | 'unique_id' |
| time_col | str | Column that identifies each timestep; its values can be timestamps or integers. | 'ds' |
| target_col | str | Column that contains the target. | 'y' |
| step_size | int | Step size between each cross validation window. If None, it will be equal to h. | None |
| static_features | list of str | Names of the features that are static and will be repeated when forecasting. | None |
| dropna | bool | Drop rows with missing values produced by the transformations. | True |
| keep_last_n | int | Keep only this many records from each series for the forecasting step. Can save time and memory if your features allow it. | None |
| refit | bool | Retrain the models for each cross validation window. If False, the models are trained once at the beginning and then used to predict each window. | True |
| before_predict_callback | callable | Function to call on the features before computing the predictions. It takes the input dataframe that will be passed to the model for predicting and should return a dataframe with the same structure. The series identifier is on the index. | None |
| after_predict_callback | callable | Function to call on the predictions before updating the targets. It takes a pandas Series with the predictions and should return another one with the same structure. The series identifier is on the index. | None |
| input_size | int | Maximum training samples per series in each window. If None, an expanding window is used. | None |
Returns:

| Type | Description |
|---|---|
| dask, spark or ray DataFrame | Predictions for each window, with the series id, timestamp, target value and predictions from each model. |
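A backtesting sketch over the same Dask frame from the fit example; three windows of 14 periods each:

```python
cv_df = fcst.cross_validation(ddf, n_windows=3, h=14)
cv_df.compute()  # series id, timestamp, target value and one column per model
```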