The BaseRecurrent class contains standard methods shared across recurrent neural networks; these models can process variable-length input sequences through their internal memory states. The class is inherited by LSTM, GRU, and RNN, as well as by more sophisticated architectures like MQCNN.

The standard methods include TemporalNorm preprocessing, optimization utilities such as parameter initialization, training_step, validation_step, and shared fit and predict methods. These shared methods make all neuralforecast.models compatible with the core.NeuralForecast wrapper class.
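
For instance, any recurrent model can be driven through the core.NeuralForecast wrapper. A minimal sketch using LSTM and the AirPassengersDF example dataset bundled with the library:

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import LSTM
from neuralforecast.utils import AirPassengersDF

# LSTM inherits its fit/predict interface from BaseRecurrent,
# which is what makes it compatible with the NeuralForecast wrapper.
nf = NeuralForecast(
    models=[LSTM(h=12, input_size=24, max_steps=100)],
    freq='M',
)
nf.fit(df=AirPassengersDF)
forecasts = nf.predict()  # DataFrame with h=12 forecasts per series
```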


BaseRecurrent

 BaseRecurrent (h, input_size, inference_input_size, loss, valid_loss,
                learning_rate, max_steps, val_check_steps, batch_size,
                valid_batch_size, scaler_type='robust', num_lr_decays=0,
                early_stop_patience_steps=-1, futr_exog_list=None,
                hist_exog_list=None, stat_exog_list=None,
                num_workers_loader=0, drop_last_loader=False,
                random_seed=1, alias=None, optimizer=None,
                optimizer_kwargs=None, lr_scheduler=None,
                lr_scheduler_kwargs=None, dataloader_kwargs=None,
                **trainer_kwargs)

*Base Recurrent

Base class for all recurrent-based models. Forecasts are produced sequentially across windows.

This class implements the basic functionality for all recurrent-based models, including:
- PyTorch Lightning's training_step, validation_step, and predict_step methods.
- fit and predict methods used by the NeuralForecast.core class.
- Sampling and wrangling methods for sequential windows.
*
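
BaseRecurrent is not instantiated directly; its arguments are exposed through the constructors of its subclasses. A minimal sketch configuring the shared arguments on LSTM, assuming defaults for the architecture-specific ones:

```python
from neuralforecast.models import LSTM
from neuralforecast.losses.pytorch import MAE

# Shared BaseRecurrent arguments, passed through a subclass constructor.
model = LSTM(
    h=12,                     # forecast horizon
    input_size=24,            # encoder input size during training
    inference_input_size=24,  # encoder input size at inference time
    loss=MAE(),               # training loss
    learning_rate=1e-3,
    max_steps=500,
    scaler_type='robust',     # TemporalNorm preprocessing
    random_seed=1,
)
```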


BaseRecurrent.fit

 BaseRecurrent.fit (dataset, val_size=0, test_size=0, random_seed=None,
                    distributed_config=None)

*Fit.

The fit method optimizes the neural network's weights using the initialization parameters (learning_rate, batch_size, ...) and the loss function defined during initialization. Within fit, we use a PyTorch Lightning Trainer that inherits the initialization's self.trainer_kwargs; to customize its inputs, see PL's trainer arguments.

The method is designed to be compatible with scikit-learn-like classes, and in particular with the StatsForecast library.

By default, the model does not save training checkpoints to conserve disk space; to enable them, set enable_checkpointing=True in __init__.

Parameters:
dataset: NeuralForecast’s TimeSeriesDataset, see documentation.
val_size: int, validation size for temporal cross-validation.
test_size: int, test size for temporal cross-validation.
random_seed: int=None, random seed for PyTorch initializer and NumPy generators; overwrites the value set in model.__init__.
*
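
A minimal sketch of calling fit directly on a TimeSeriesDataset, assuming TimeSeriesDataset.from_df returns the dataset as its first element (the remaining elements are index metadata we discard here):

```python
from neuralforecast.models import LSTM
from neuralforecast.tsdataset import TimeSeriesDataset
from neuralforecast.utils import AirPassengersDF

# Build the dataset from a long-format DataFrame (unique_id, ds, y).
dataset, *_ = TimeSeriesDataset.from_df(AirPassengersDF)

model = LSTM(h=12, input_size=24, max_steps=100)
# Hold out the last 12 observations of each series for validation.
model.fit(dataset, val_size=12)
```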


BaseRecurrent.predict

 BaseRecurrent.predict (dataset, step_size=1, random_seed=None,
                        **data_module_kwargs)

*Predict.

Neural network prediction performed by PL's Trainer executing predict_step.

Parameters:
dataset: NeuralForecast’s TimeSeriesDataset, see documentation.
step_size: int=1, step size between each prediction window.
random_seed: int=None, random seed for PyTorch initializer and NumPy generators; overwrites the value set in model.__init__.
**data_module_kwargs: PL’s TimeSeriesDataModule args, see documentation.*
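
A minimal sketch of calling predict after fitting, under the same assumption about TimeSeriesDataset.from_df as above:

```python
from neuralforecast.models import LSTM
from neuralforecast.tsdataset import TimeSeriesDataset
from neuralforecast.utils import AirPassengersDF

dataset, *_ = TimeSeriesDataset.from_df(AirPassengersDF)
model = LSTM(h=12, input_size=24, max_steps=100)
model.fit(dataset)

# predict runs PL's Trainer over predict_step; step_size controls the
# stride between consecutive forecast windows.
y_hat = model.predict(dataset, step_size=1)
```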