- Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y. Zhang, Xiaoming Shi, Pin-Yu Chen, Yuxuan Liang, Yuan-Fang Li, Shirui Pan, Qingsong Wen. “Time-LLM: Time Series Forecasting by Reprogramming Large Language Models”
1. Auxiliary Functions
- ReprogrammingLayer
- FlattenHead
- PatchEmbedding
- TokenEmbedding
- ReplicationPad1d
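The patch construction these helpers perform can be summarized in a few lines. The snippet below is a minimal sketch, not the library's exact implementation, with assumed shapes and hyperparameter values: the end of the series is replication-padded, unfolded into overlapping patches of length patch_len with the given stride, and each patch is linearly projected to d_model, which is roughly the role of ReplicationPad1d, PatchEmbedding and TokenEmbedding before the ReprogrammingLayer maps the patch embeddings onto the LLM's token space.

```python
# Minimal sketch of the patching step (assumed shapes/values; not the library's exact code).
import torch
import torch.nn as nn

patch_len, stride, d_model = 16, 8, 32
x = torch.randn(4, 1, 96)                        # [batch, channels, time]
x = nn.ReplicationPad1d((0, stride))(x)          # repeat the last value `stride` times
patches = x.unfold(-1, patch_len, stride)        # [batch, channels, n_patches, patch_len]
tokens = nn.Linear(patch_len, d_model)(patches)  # per-patch embeddings fed to the reprogramming layer
```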
2. Model
TimeLLM
Time-LLM is a reprogramming framework that repurposes an off-the-shelf LLM for time series forecasting. It trains a reprogramming layer that translates the observed series into a language task; this is fed to the LLM, and an output projection layer translates the output back into numerical predictions. A minimal usage example follows the parameter list below.

Parameters:
- h: int, forecast horizon.
- input_size: int, autoregressive inputs size, y=[1,2,3,4] input_size=2 -> y_[t-2:t]=[1,2].
- patch_len: int=16, length of patch.
- stride: int=8, stride of patch.
- d_ff: int=128, dimension of fcn.
- top_k: int=5, top tokens to consider.
- d_llm: int=768, hidden dimension of LLM (LLama7b: 4096; GPT2-small: 768; BERT-base: 768).
- d_model: int=32, dimension of model.
- n_heads: int=8, number of heads in attention layer.
- enc_in: int=7, encoder input size.
- dec_in: int=7, decoder input size.
- llm=None, path to pretrained LLM model to use. If not specified, it will use GPT-2 from https://huggingface.co/openai-community/gpt2.
- llm_config: deprecated, configuration of LLM. If not specified, it will use the configuration of GPT-2 from https://huggingface.co/openai-community/gpt2.
- llm_tokenizer: deprecated, tokenizer of LLM. If not specified, it will use the GPT-2 tokenizer from https://huggingface.co/openai-community/gpt2.
- llm_num_hidden_layers=32, hidden layers in LLM.
- llm_output_attention: bool=True, whether to output attention in encoder.
- llm_output_hidden_states: bool=True, whether to output hidden states.
- prompt_prefix: str=None, prompt to inform the LLM about the dataset.
- dropout: float=0.1, dropout rate.
- stat_exog_list: str list, static exogenous columns.
- hist_exog_list: str list, historic exogenous columns.
- futr_exog_list: str list, future exogenous columns.
- loss: PyTorch module, instantiated train loss class from losses collection.
- valid_loss: PyTorch module=loss, instantiated valid loss class from losses collection.
- learning_rate: float=1e-3, learning rate between (0, 1).
- max_steps: int=1000, maximum number of training steps.
- val_check_steps: int=100, number of training steps between every validation loss check.
- batch_size: int=32, number of different series in each batch.
- valid_batch_size: int=None, number of different series in each validation and test batch, if None uses batch_size.
- windows_batch_size: int=1024, number of windows to sample in each training batch, default uses all.
- inference_windows_batch_size: int=1024, number of windows to sample in each inference batch.
- start_padding_enabled: bool=False, if True, the model will pad the time series with zeros at the beginning, by input size.
- step_size: int=1, step size between each window of temporal data.
- num_lr_decays: int=-1, number of learning rate decays, evenly distributed across max_steps.
- early_stop_patience_steps: int=-1, number of validation iterations before early stopping.
- scaler_type: str='identity', type of scaler for temporal inputs normalization, see temporal scalers.
- random_seed: int, random seed for pytorch initializer and numpy generators.
- drop_last_loader: bool=False, if True TimeSeriesDataLoader drops last non-full batch.
- alias: str, optional, custom name of the model.
- optimizer: subclass of 'torch.optim.Optimizer', optional, user-specified optimizer instead of the default choice (Adam).
- optimizer_kwargs: dict, optional, list of parameters used by the user-specified optimizer.
- lr_scheduler: subclass of 'torch.optim.lr_scheduler.LRScheduler', optional, user-specified lr_scheduler instead of the default choice (StepLR).
- lr_scheduler_kwargs: dict, optional, list of parameters used by the user-specified lr_scheduler.
- dataloader_kwargs: dict, optional, list of parameters passed into the PyTorch Lightning dataloader by the TimeSeriesDataLoader.
- **trainer_kwargs: keyword trainer arguments inherited from PyTorch Lightning's trainer.

References:
- Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y. Zhang, Xiaoming Shi, Pin-Yu Chen, Yuxuan Liang, Yuan-Fang Li, Shirui Pan, Qingsong Wen. “Time-LLM: Time Series Forecasting by Reprogramming Large Language Models”
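A minimal usage sketch, assuming the neuralforecast package is installed and using its AirPassengersDF toy dataset; the hyperparameters are illustrative rather than recommended, and with llm=None the model downloads GPT-2 by default:

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import TimeLLM
from neuralforecast.losses.pytorch import MAE
from neuralforecast.utils import AirPassengersDF

prompt_prefix = "The dataset contains monthly totals of international airline passengers."

model = TimeLLM(
    h=12,                        # forecast 12 months ahead
    input_size=36,               # use the last 36 observations as input
    patch_len=16,
    stride=8,
    prompt_prefix=prompt_prefix,
    loss=MAE(),
    batch_size=32,
    max_steps=100,
)

nf = NeuralForecast(models=[model], freq="M")
nf.fit(df=AirPassengersDF)
forecasts = nf.predict()
```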
TimeLLM.fit
Fit. The fit method optimizes the neural network's weights using the initialization parameters (learning_rate, windows_batch_size, ...) and the loss function defined during initialization. Within fit we use a PyTorch Lightning Trainer that inherits the initialization's self.trainer_kwargs to customize its inputs; see PL's trainer arguments.

The method is designed to be compatible with SKLearn-like classes, and in particular with the StatsForecast library.

By default the model does not save training checkpoints to protect disk memory; to enable them, set enable_checkpointing=True in __init__.

Parameters:
- dataset: NeuralForecast's TimeSeriesDataset, see documentation.
- val_size: int, validation size for temporal cross-validation.
- random_seed: int=None, random seed for pytorch initializer and numpy generators, overwrites model.__init__'s.
- test_size: int, test size for temporal cross-validation.
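Most workflows call fit indirectly through NeuralForecast.fit as in the example above. The sketch below, under the same assumptions (AirPassengersDF, illustrative hyperparameters), shows the direct call on a TimeSeriesDataset built from a long-format dataframe:

```python
from neuralforecast.models import TimeLLM
from neuralforecast.losses.pytorch import MAE
from neuralforecast.tsdataset import TimeSeriesDataset
from neuralforecast.utils import AirPassengersDF

# Build the dataset object the model expects; from_df also returns ids/dates, ignored here.
dataset, *_ = TimeSeriesDataset.from_df(AirPassengersDF)

model = TimeLLM(h=12, input_size=36, loss=MAE(), max_steps=100)
model.fit(dataset=dataset, val_size=12)
```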
TimeLLM.predict
Predict. Neural network prediction with PL's Trainer execution of predict_step.

Parameters:
- dataset: NeuralForecast's TimeSeriesDataset, see documentation.
- test_size: int=None, test size for temporal cross-validation.
- step_size: int=1, step size between each window.
- random_seed: int=None, random seed for pytorch initializer and numpy generators, overwrites model.__init__'s.
- quantiles: list of floats, optional (default=None), target quantiles to predict.
- **data_module_kwargs: PL's TimeSeriesDataModule args, see documentation.
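Continuing the direct-call sketch above (same assumed dataset and fitted model), predict returns the numerical forecasts as an array:

```python
# Forecasts for the forecasting windows of the dataset; the exact output shape
# depends on the configured loss and number of outputs.
y_hat = model.predict(dataset=dataset)
```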

