TimeGPT accepts pandas and polars dataframes in long format with the following required columns:

  • ds (timestamp): The timestamp in the format YYYY-MM-DD or YYYY-MM-DD HH:MM:SS.
  • y (numeric): The target variable to forecast.

(Optionally, you can also pass a DataFrame without the ds column, as long as it has a DatetimeIndex.)
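
As a minimal sketch of this (using a synthetic monthly series; the name df_indexed and the values are purely illustrative), such a DataFrame could look like:

import pandas as pd

# A DataFrame without an explicit ds column: the timestamps live in a DatetimeIndex
df_indexed = pd.DataFrame(
    {'y': range(36)},
    index=pd.date_range('2020-01-01', periods=36, freq='MS'),
)

# The timestamps are then taken from the index, e.g.:
# fcst = nixtla_client.forecast(df=df_indexed, h=12)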

TimeGPT also works with distributed dataframes from frameworks such as Dask, Spark, and Ray.

You can also include exogenous features in the DataFrame as additional columns. For more information, follow this tutorial.

Below is an example of a valid input dataframe for TimeGPT.

import pandas as pd 

df = pd.read_csv('https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/air_passengers.csv')
df.head()
    timestamp  value
0  1949-01-01    112
1  1949-02-01    118
2  1949-03-01    132
3  1949-04-01    129
4  1949-05-01    121

Note that in this example, the ds column is named timestamp and the y column is named value. You can either:

  1. Rename the columns to ds and y, respectively (see the example after this list), or

  2. Keep the current column names and specify them when using any method from the NixtlaClient class with the time_col and target_col arguments.
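
For the first option, renaming is a one-liner with pandas (a sketch that reuses the df loaded above; df_renamed is just an illustrative name):

# Rename the columns to the defaults expected by TimeGPT
df_renamed = df.rename(columns={'timestamp': 'ds', 'value': 'y'})
df_renamed.head()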

For the second option, when using the forecast method from the NixtlaClient class, first instantiate the class and then specify the column names as follows.

from nixtla import NixtlaClient

nixtla_client = NixtlaClient(
    api_key = 'my_api_key_provided_by_nixtla'
)
fcst = nixtla_client.forecast(df=df, h=12, time_col='timestamp', target_col='value')
fcst.head()
INFO:nixtla.nixtla_client:Validating inputs...
INFO:nixtla.nixtla_client:Preprocessing dataframes...
INFO:nixtla.nixtla_client:Inferred freq: MS
INFO:nixtla.nixtla_client:Calling Forecast Endpoint...
    timestamp     TimeGPT
0  1961-01-01  437.837921
1  1961-02-01  426.062714
2  1961-03-01  463.116547
3  1961-04-01  478.244507
4  1961-05-01  505.646484

In this example, the NixtlaClient infers the frequency, but you can specify it explicitly with the freq argument.
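
For instance, for this monthly series you could pass freq='MS' (the pandas alias for month start) explicitly; this is just a variation of the call above:

fcst = nixtla_client.forecast(
    df=df,
    h=12,
    freq='MS',               # month-start frequency of the air passengers data
    time_col='timestamp',
    target_col='value',
)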

To learn more about how to instantiate the NixtlaClient class, refer to the TimeGPT Quickstart.

Multiple Series

If you’re working with multiple time series, make sure that each series has a unique identifier. You can name this column unique_id or specify its name using the id_col argument when calling any method from the NixtlaClient class. This column should be a string, integer, or category.

In this example, we have five series representing hourly electricity prices in five different markets. The columns already have the default names, so it’s unnecessary to specify the id_col, time_col, or target_col arguments. If your columns have different names, specify these arguments as required.
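
For reference, if your columns had non-default names (the names market, timestamp, and price below are purely hypothetical), the call would look like this sketch:

fcst = nixtla_client.forecast(
    df=df,
    h=24,
    id_col='market',         # hypothetical name of the series identifier column
    time_col='timestamp',    # hypothetical name of the timestamp column
    target_col='price',      # hypothetical name of the target column
)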

df = pd.read_csv('https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/electricity-short.csv')
df.head()
  unique_id                   ds      y
0        BE  2016-10-22 00:00:00  70.00
1        BE  2016-10-22 01:00:00  37.10
2        BE  2016-10-22 02:00:00  37.10
3        BE  2016-10-22 03:00:00  44.75
4        BE  2016-10-22 04:00:00  37.10
fcst = nixtla_client.forecast(df=df, h=24) # use id_col, time_col and target_col here if needed. 
fcst.head()
INFO:nixtla.nixtla_client:Validating inputs...
INFO:nixtla.nixtla_client:Preprocessing dataframes...
INFO:nixtla.nixtla_client:Inferred freq: H
INFO:nixtla.nixtla_client:Calling Forecast Endpoint...
  unique_id                   ds    TimeGPT
0        BE  2016-12-31 00:00:00  45.190453
1        BE  2016-12-31 01:00:00  43.244446
2        BE  2016-12-31 02:00:00  41.958389
3        BE  2016-12-31 03:00:00  39.796486
4        BE  2016-12-31 04:00:00  39.204536

When working with a large number of time series, consider using a distributed computing framework to handle the data efficiently. TimeGPT supports frameworks such as Spark, Dask, and Ray.
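
As a minimal sketch, a pandas DataFrame can be converted into a Dask DataFrame and passed to the same forecast call (assuming dask is installed; the partition count is illustrative):

import dask.dataframe as dd

# Split the pandas DataFrame into partitions that can be processed in parallel
distributed_df = dd.from_pandas(df, npartitions=2)

fcst = nixtla_client.forecast(df=distributed_df, h=24)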

Exogenous Variables

TimeGPT also accepts exogenous variables. You can add exogenous variables to your dataframe by including additional columns after the y column.

df = pd.read_csv('https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/electricity-short-with-ex-vars.csv')
df.head()
  unique_id                   ds      y  Exogenous1  Exogenous2  day_0  day_1  day_2  day_3  day_4  day_5  day_6
0        BE  2016-10-22 00:00:00  70.00     49593.0     57253.0    0.0    0.0    0.0    0.0    0.0    1.0    0.0
1        BE  2016-10-22 01:00:00  37.10     46073.0     51887.0    0.0    0.0    0.0    0.0    0.0    1.0    0.0
2        BE  2016-10-22 02:00:00  37.10     44927.0     51896.0    0.0    0.0    0.0    0.0    0.0    1.0    0.0
3        BE  2016-10-22 03:00:00  44.75     44483.0     48428.0    0.0    0.0    0.0    0.0    0.0    1.0    0.0
4        BE  2016-10-22 04:00:00  37.10     44338.0     46721.0    0.0    0.0    0.0    0.0    0.0    1.0    0.0

When using exogenous variables, you also need to provide their future values over the forecast horizon.

future_ex_vars_df = pd.read_csv('https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/electricity-short-future-ex-vars.csv')
future_ex_vars_df.head()
  unique_id                   ds  Exogenous1  Exogenous2  day_0  day_1  day_2  day_3  day_4  day_5  day_6
0        BE  2016-12-31 00:00:00     64108.0     70318.0    0.0    0.0    0.0    0.0    0.0    1.0    0.0
1        BE  2016-12-31 01:00:00     62492.0     67898.0    0.0    0.0    0.0    0.0    0.0    1.0    0.0
2        BE  2016-12-31 02:00:00     61571.0     68379.0    0.0    0.0    0.0    0.0    0.0    1.0    0.0
3        BE  2016-12-31 03:00:00     60381.0     64972.0    0.0    0.0    0.0    0.0    0.0    1.0    0.0
4        BE  2016-12-31 04:00:00     60298.0     62900.0    0.0    0.0    0.0    0.0    0.0    1.0    0.0
fcst = nixtla_client.forecast(df=df, X_df=future_ex_vars_df, h=24)
fcst.head()
INFO:nixtla.nixtla_client:Validating inputs...
INFO:nixtla.nixtla_client:Preprocessing dataframes...
INFO:nixtla.nixtla_client:Inferred freq: H
INFO:nixtla.nixtla_client:Using the following exogenous variables: Exogenous1, Exogenous2, day_0, day_1, day_2, day_3, day_4, day_5, day_6
INFO:nixtla.nixtla_client:Calling Forecast Endpoint...
  unique_id                   ds    TimeGPT
0        BE  2016-12-31 00:00:00  74.540773
1        BE  2016-12-31 01:00:00  43.344289
2        BE  2016-12-31 02:00:00  44.429220
3        BE  2016-12-31 03:00:00  38.094395
4        BE  2016-12-31 04:00:00  37.389141

To learn more about how to use exogenous variables with TimeGPT, consult the Exogenous Variables tutorial.

Important Considerations

When using TimeGPT, the data cannot contain missing values. This means that for every series, there should be no gaps in the timestamps and no missing values in the target variable.
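
As a quick sanity check before calling TimeGPT, you can count missing timestamps per series with plain pandas (a sketch assuming the default column names and a known frequency, here hourly; missing_timestamps is just an illustrative helper):

import pandas as pd

def missing_timestamps(df, freq='H', id_col='unique_id', time_col='ds'):
    # Number of timestamps missing from each series for the given frequency
    def gaps(times):
        times = pd.to_datetime(times)
        full_range = pd.date_range(times.min(), times.max(), freq=freq)
        return len(full_range) - times.nunique()
    return df.groupby(id_col)[time_col].apply(gaps)

missing_timestamps(df, freq='H')  # should be 0 for every series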

For more, please refer to the tutorial on Dealing with Missing Values in TimeGPT.

Minimum Data Requirements (for Azure AI)

TimeGPT currently supports any amount of data for generating point forecasts. That is, the minimum size per series required to get results from nixtla_client.forecast(df=df, h=h, freq=freq) is one observation, regardless of the frequency.

For Azure AI, when using the arguments level, finetune_steps, X_df (exogenous variables), or add_history, the API requires a minimum number of data points depending on the frequency. Here are the minimum sizes for each frequency:

Frequency                                               Minimum Size
Hourly and subhourly (e.g., “H”, “min”, “15T”)          1008
Daily (“D”)                                             300
Weekly (e.g., “W-MON”, …, “W-SUN”)                      64
Monthly and other frequencies (e.g., “M”, “MS”, “Y”)    48

For cross-validation, you need to consider these numbers as well as the forecast horizon (h), the number of windows (n_windows), and the step size between consecutive windows (step_size). Thus, the minimum number of observations per series in this case is determined by the following relationship:

Minimum number described previously + h + step_size * (n_windows - 1)
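
For example, a worked calculation for hourly data (with h=24, n_windows=3, and step_size=24 chosen purely for illustration) would be:

min_size = 1008            # minimum for hourly data on Azure AI (see the table above)
h = 24                     # forecast horizon
n_windows = 3              # number of cross-validation windows
step_size = 24             # step size between consecutive windows

min_obs_per_series = min_size + h + step_size * (n_windows - 1)
print(min_obs_per_series)  # 1080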