module neuralforecast.models.timellm
Global Variables
- IS_TRANSFORMERS_INSTALLED
class ReplicationPad1d
method __init__
method forward
class TokenEmbedding
method __init__
method forward
class PatchEmbedding
method __init__
method forward
class FlattenHead
method __init__
method forward
class ReprogrammingLayer
method __init__
method forward
method reprogramming
class TimeLLM
Time-LLM is a reprogramming framework that repurposes an off-the-shelf LLM for time series forecasting.
It trains a reprogramming layer that translates the observed series into a language task; this representation is fed to the LLM, and an output projection layer maps the LLM output back to numerical predictions. A minimal usage sketch follows the argument list below.
Args:
- h (int): Forecast horizon.
- input_size (int): autoregressive input size, y=[1,2,3,4] input_size=2 -> y_[t-2:t]=[1,2].
- patch_len (int): length of patch. Default: 16.
- stride (int): stride of patch. Default: 8.
- d_ff (int): dimension of fcn. Default: 128.
- top_k (int): top tokens to consider. Default: 5.
- d_llm (int): hidden dimension of LLM. Default: 768 (LLaMA-7b: 4096; GPT-2 small: 768; BERT-base: 768).
- d_model (int): dimension of model. Default: 32.
- n_heads (int): number of heads in attention layer. Default: 8.
- enc_in (int): encoder input size. Default: 7.
- dec_in (int): decoder input size. Default: 7.
- llm (str): path to the pretrained LLM to use. If not specified, GPT-2 from https://huggingface.co/openai-community/gpt2 is used.
- llm_config (dict): deprecated, configuration of the LLM. If not specified, the GPT-2 configuration from https://huggingface.co/openai-community/gpt2 is used.
- llm_tokenizer (str): deprecated, tokenizer of the LLM. If not specified, the GPT-2 tokenizer from https://huggingface.co/openai-community/gpt2 is used.
- llm_num_hidden_layers (int): hidden layers in the LLM. Default: 32.
- llm_output_attention (bool): whether to output attention in the encoder. Default: True.
- llm_output_hidden_states (bool): whether to output hidden states. Default: True.
- prompt_prefix (str): prompt to inform the LLM about the dataset. Default: None.
- dropout (float): dropout rate. Default: 0.1.
- stat_exog_list (list): static exogenous columns.
- hist_exog_list (list): historic exogenous columns.
- futr_exog_list (list): future exogenous columns.
- loss (PyTorch module): instantiated train loss class from the losses collection.
- valid_loss (PyTorch module): instantiated valid loss class from the losses collection.
- learning_rate (float): learning rate between (0, 1). Default: 1e-3.
- max_steps (int): maximum number of training steps. Default: 1000.
- val_check_steps (int): number of training steps between every validation loss check. Default: 100.
- batch_size (int): number of different series in each batch. Default: 32.
- valid_batch_size (int): number of different series in each validation and test batch; if None, uses batch_size. Default: None.
- windows_batch_size (int): number of windows to sample in each training batch; uses all if None. Default: 1024.
- inference_windows_batch_size (int): number of windows to sample in each inference batch. Default: 1024.
- start_padding_enabled (bool): if True, the model pads the time series with zeros at the beginning, by input size. Default: False.
- training_data_availability_threshold (Union[float, List[float]]): minimum fraction of valid data points required for training windows. A single float applies to both insample and outsample; a list of two floats specifies [insample_fraction, outsample_fraction]. The default 0.0 allows windows with only one valid data point (current behavior).
- step_size (int): step size between each window of temporal data. Default: 1.
- num_lr_decays (int): number of learning rate decays, evenly distributed across max_steps. Default: -1.
- early_stop_patience_steps (int): number of validation iterations before early stopping. Default: -1.
- scaler_type (str): type of scaler for temporal inputs normalization; see temporal scalers. Default: 'identity'.
- random_seed (int): random seed for PyTorch initializers and numpy generators. Default: 1.
- drop_last_loader (bool): if True, TimeSeriesDataLoader drops the last non-full batch. Default: False.
- alias (str): optional, custom name of the model.
- optimizer (subclass of torch.optim.Optimizer): optional, user-specified optimizer instead of the default choice (Adam).
- optimizer_kwargs (dict): optional, parameters used by the user-specified optimizer.
- lr_scheduler (subclass of torch.optim.lr_scheduler.LRScheduler): optional, user-specified lr_scheduler instead of the default choice (StepLR).
- lr_scheduler_kwargs (dict): optional, parameters used by the user-specified lr_scheduler.
- dataloader_kwargs (dict): optional, parameters passed to the PyTorch Lightning dataloader by the TimeSeriesDataLoader.
- **trainer_kwargs: keyword trainer arguments inherited from PyTorch Lightning's trainer.
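A minimal usage sketch, assuming the standard NeuralForecast fit/predict workflow and the bundled AirPassengersDF example dataset; the `llm` hub id and hyperparameter values here are illustrative assumptions, not tuned or recommended settings:

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import TimeLLM
from neuralforecast.utils import AirPassengersDF  # long format: unique_id, ds, y

# Reprogram a GPT-2 backbone for 12-step-ahead monthly forecasts.
model = TimeLLM(
    h=12,                                  # forecast horizon
    input_size=36,                         # autoregressive window
    llm="openai-community/gpt2",           # assumed Hugging Face hub id; GPT-2 is also the default
    prompt_prefix="Monthly airline passenger counts.",
    patch_len=16,
    stride=8,
    max_steps=100,                         # short run, for illustration only
)

nf = NeuralForecast(models=[model], freq="M")  # monthly frequency
nf.fit(df=AirPassengersDF)
forecasts = nf.predict()                       # DataFrame with a TimeLLM column
```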
method __init__
property automatic_optimization
If set to False, you are responsible for calling .backward(), .step(), and .zero_grad().
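For illustration, a minimal sketch of manual optimization in a generic LightningModule (not TimeLLM-specific; the module and its layers are hypothetical):

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl

class ManualOptModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False   # take over the optimization loop
        self.layer = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        opt = self.optimizers()
        opt.zero_grad()
        loss = F.mse_loss(self.layer(x), y)
        self.manual_backward(loss)            # use instead of loss.backward()
        opt.step()

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```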
property current_epoch
The current epoch in the Trainer, or 0 if not attached.
property device
property device_mesh
Strategies like ModelParallelStrategy will create a device mesh that can be accessed in the :meth:~pytorch_lightning.core.hooks.ModelHooks.configure_model hook to parallelize the LightningModule.
property dtype
property example_input_array
The example input array is a specification of what the module can consume in the :meth:forward method. The return type is interpreted as follows:
- Single tensor: It is assumed the model takes a single argument, i.e., model.forward(model.example_input_array)
- Tuple: The input array should be interpreted as a sequence of positional arguments, i.e., model.forward(*model.example_input_array)
- Dict: The input array represents named keyword arguments, i.e., model.forward(**model.example_input_array)
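A short sketch of setting this property on a hypothetical module, so Lightning can trace forward (e.g., for the model summary or ONNX export):

```python
import torch
import pytorch_lightning as pl

class TinyModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(8, 1)
        # Single tensor -> Lightning calls forward(example_input_array)
        self.example_input_array = torch.rand(2, 8)

    def forward(self, x):
        return self.net(x)
```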
property fabric
property global_rank
The index of the current process across all nodes and devices.
property global_step
Total training batches seen across all epochs. If no Trainer is attached, this property is 0.
property hparams
The collection of hyperparameters saved with :meth:save_hyperparameters. It is mutable by the user. For the frozen set of initial hyperparameters, use :attr:hparams_initial.
Returns:
Mutable hyperparameters dictionary
property hparams_initial
The collection of hyperparameters saved with :meth:save_hyperparameters. These contents are read-only. Manual updates to the saved hyperparameters can instead be performed through :attr:hparams.
Returns:
AttributeDict: immutable initial hyperparameters
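A brief sketch of how :meth:save_hyperparameters populates both properties, using a hypothetical module:

```python
import pytorch_lightning as pl

class HParamsModule(pl.LightningModule):
    def __init__(self, learning_rate: float = 1e-3, hidden: int = 32):
        super().__init__()
        self.save_hyperparameters()          # records learning_rate and hidden

model = HParamsModule(learning_rate=3e-4)
print(model.hparams.learning_rate)           # 0.0003 -- mutable view
model.hparams.learning_rate = 1e-4           # allowed on hparams
print(model.hparams_initial)                 # frozen snapshot of the initial values
```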
property local_rank
The index of the current process within a single node.
property logger
Reference to the logger object in the Trainer.
property loggers
Reference to the list of loggers in the Trainer.
property on_gpu
Returns True if this model is currently located on a GPU.
Useful to set flags around the LightningModule for different CPU vs GPU behavior.
property strict_loading
Determines how Lightning loads this model using .load_state_dict(..., strict=model.strict_loading).
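A hedged sketch of relaxing strict loading before restoring weights, e.g. when some parameters (such as a frozen LLM backbone) are intentionally absent from the saved state dict; the checkpoint path and constructor arguments are hypothetical:

```python
import torch
from neuralforecast.models import TimeLLM

model = TimeLLM(h=12, input_size=36)              # illustrative arguments
model.strict_loading = False                      # tolerate missing/unexpected keys
state = torch.load("timellm.ckpt")["state_dict"]  # hypothetical checkpoint path
model.load_state_dict(state, strict=model.strict_loading)
```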

