
DistributedMLForecast

 DistributedMLForecast (models, freq:Union[int,str],
                        lags:Optional[Iterable[int]]=None,
                        lag_transforms:Optional[Dict[int,List[Union[Callable,Tuple[Callable,Any]]]]]=None,
                        date_features:Optional[Iterable[Union[str,Callable]]]=None,
                        num_threads:int=1,
                        target_transforms:Optional[List[Union[mlforecast.target_transforms.BaseTargetTransform,mlforecast.target_transforms._BaseGroupedArrayTargetTransform]]]=None,
                        engine=None,
                        num_partitions:Optional[int]=None)

Multi-backend distributed pipeline.



DistributedMLForecast.fit

 DistributedMLForecast.fit (df:~AnyDataFrame, id_col:str='unique_id',
                            time_col:str='ds', target_col:str='y',
                            static_features:Optional[List[str]]=None,
                            dropna:bool=True,
                            keep_last_n:Optional[int]=None)

Apply the feature engineering and train the models.

| | Type | Default | Details |
|---|---|---|---|
| df | AnyDataFrame | | Series data in long format. |
| id_col | str | unique_id | Column that identifies each series. |
| time_col | str | ds | Column that identifies each timestep; its values can be timestamps or integers. |
| target_col | str | y | Column that contains the target. |
| static_features | Optional | None | Names of the features that are static and will be repeated when forecasting. |
| dropna | bool | True | Drop rows with missing values produced by the transformations. |
| keep_last_n | Optional | None | Keep only this many records from each series for the forecasting step. Can save time and memory if your features allow it. |
| Returns | DistributedMLForecast | | Forecast object with series values and trained models. |


DistributedMLForecast.predict

 DistributedMLForecast.predict (h:int,
                                before_predict_callback:Optional[Callable]=None,
                                after_predict_callback:Optional[Callable]=None,
                                X_df:Optional[pandas.core.frame.DataFrame]=None,
                                new_df:Optional[~AnyDataFrame]=None)

Compute the predictions for the next horizon steps.

| | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon. |
| before_predict_callback | Optional | None | Function to call on the features before computing the predictions. It receives the dataframe that will be passed to the model for predicting and should return a dataframe with the same structure. The series identifier is on the index. |
| after_predict_callback | Optional | None | Function to call on the predictions before updating the targets. It receives a pandas Series with the predictions and should return another one with the same structure. The series identifier is on the index. |
| X_df | Optional | None | Dataframe with the future exogenous features. Should have the id column and the time column. |
| new_df | Optional | None | Series data of new observations for which forecasts are to be generated. It should have the same structure as the dataframe used to fit the model, including any features and time series data. If new_df is not None, the method will generate forecasts for the new observations. |
| Returns | AnyDataFrame | | Predictions for each series and timestep, with one column per model. |


DistributedMLForecast.save

 DistributedMLForecast.save (path:str)

Save forecast object

| | Type | Details |
|---|---|---|
| path | str | Directory where artifacts will be stored. |
| Returns | None | |


DistributedMLForecast.load

 DistributedMLForecast.load (path:str, engine)

Load forecast object

| | Type | Details |
|---|---|---|
| path | str | Directory with saved artifacts. |
| engine | fugue execution engine | Dask Client, Spark Session, etc. to use for the distributed computation. |
| Returns | DistributedMLForecast | |


DistributedMLForecast.update

 DistributedMLForecast.update (df:pandas.core.frame.DataFrame)

Update the values of the stored series.

| | Type | Details |
|---|---|---|
| df | DataFrame | Dataframe with new observations. |
| Returns | None | |


DistributedMLForecast.to_local

 DistributedMLForecast.to_local ()

Convert this distributed forecast object into a local one.

This pulls all the data from the remote machines, so you have to be sure that it fits in the scheduler/driver. If you're not sure, use the save method instead.



DistributedMLForecast.preprocess

 DistributedMLForecast.preprocess (df:~AnyDataFrame, id_col:str='unique_id',
                                   time_col:str='ds', target_col:str='y',
                                   static_features:Optional[List[str]]=None,
                                   dropna:bool=True,
                                   keep_last_n:Optional[int]=None)

Add the features to the data.

| | Type | Default | Details |
|---|---|---|---|
| df | AnyDataFrame | | Series data in long format. |
| id_col | str | unique_id | Column that identifies each series. |
| time_col | str | ds | Column that identifies each timestep; its values can be timestamps or integers. |
| target_col | str | y | Column that contains the target. |
| static_features | Optional | None | Names of the features that are static and will be repeated when forecasting. |
| dropna | bool | True | Drop rows with missing values produced by the transformations. |
| keep_last_n | Optional | None | Keep only this many records from each series for the forecasting step. Can save time and memory if your features allow it. |
| Returns | AnyDataFrame | | df with added features. |


DistributedMLForecast.cross_validation

 DistributedMLForecast.cross_validation (df:~AnyDataFrame, n_windows:int,
                                         h:int, id_col:str='unique_id',
                                         time_col:str='ds', target_col:str='y',
                                         step_size:Optional[int]=None,
                                         static_features:Optional[List[str]]=None,
                                         dropna:bool=True,
                                         keep_last_n:Optional[int]=None,
                                         refit:bool=True,
                                         before_predict_callback:Optional[Callable]=None,
                                         after_predict_callback:Optional[Callable]=None,
                                         input_size:Optional[int]=None)

Perform time series cross validation. Creates n_windows splits where each window has h test periods, trains the models, computes the predictions and merges the actuals.

| | Type | Default | Details |
|---|---|---|---|
| df | AnyDataFrame | | Series data in long format. |
| n_windows | int | | Number of windows to evaluate. |
| h | int | | Number of test periods in each window. |
| id_col | str | unique_id | Column that identifies each series. |
| time_col | str | ds | Column that identifies each timestep; its values can be timestamps or integers. |
| target_col | str | y | Column that contains the target. |
| step_size | Optional | None | Step size between each cross validation window. If None it will be equal to h. |
| static_features | Optional | None | Names of the features that are static and will be repeated when forecasting. |
| dropna | bool | True | Drop rows with missing values produced by the transformations. |
| keep_last_n | Optional | None | Keep only this many records from each series for the forecasting step. Can save time and memory if your features allow it. |
| refit | bool | True | Retrain the model for each cross validation window. If False, the models are trained at the beginning and then used to predict each window. |
| before_predict_callback | Optional | None | Function to call on the features before computing the predictions. It receives the dataframe that will be passed to the model for predicting and should return a dataframe with the same structure. The series identifier is on the index. |
| after_predict_callback | Optional | None | Function to call on the predictions before updating the targets. It receives a pandas Series with the predictions and should return another one with the same structure. The series identifier is on the index. |
| input_size | Optional | None | Maximum training samples per series in each window. If None, will use an expanding window. |
| Returns | AnyDataFrame | | Predictions for each window with the series id, timestamp, target value and predictions from each model. |