SDK Reference
NixtlaClient
NixtlaClient (api_key:Optional[str]=None, base_url:Optional[str]=None, timeout:int=60, max_retries:int=6, retry_interval:int=10, max_wait_time:int=360)
Client to interact with the Nixtla API.
Name | Type | Default | Details |
---|---|---|---|
api_key | Optional | None | API key used to authenticate with the Nixtla API. If not provided, the NIXTLA_API_KEY environment variable will be used. |
base_url | Optional | None | Custom base URL for the API. If not provided, the NIXTLA_BASE_URL environment variable will be used. |
timeout | int | 60 | Request timeout in seconds. Set this to None to disable it. |
max_retries | int | 6 | The maximum number of attempts to make when calling the API before giving up. With the default of 6, the client will attempt the API call up to 6 times in total. |
retry_interval | int | 10 | The interval in seconds between consecutive retry attempts, i.e. how long the client waits before calling the API again after a failed attempt. Defaults to 10 seconds. |
max_wait_time | int | 360 | The maximum total time in seconds that the client will spend across all retry attempts before giving up. If this limit is exceeded, the client stops retrying and raises an exception. Defaults to 360 seconds. Note that the client raises a ReadTimeout error after 60 seconds of inactivity, so to catch these errors use a max_wait_time much greater than 60. |
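A minimal construction sketch, assuming the client is importable as `from nixtla import NixtlaClient` and that the API key is available in the environment; the values shown are the defaults documented above.

```python
import os

from nixtla import NixtlaClient  # assumed import path for the SDK

# If api_key is omitted, the client falls back to the NIXTLA_API_KEY
# environment variable; it is passed explicitly here for clarity.
client = NixtlaClient(
    api_key=os.environ.get("NIXTLA_API_KEY"),
    timeout=60,         # per-request timeout in seconds
    max_retries=6,      # up to 6 attempts in total
    retry_interval=10,  # wait 10 seconds between attempts
    max_wait_time=360,  # stop retrying once ~360 seconds have been spent
)
```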
NixtlaClient.validate_api_key
NixtlaClient.validate_api_key (log:bool=True)
Returns True if your api_key is valid.
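A quick validation sketch, using the `client` instance constructed above:

```python
# Returns True when the configured api_key is accepted by the API;
# pass log=False to suppress the client's log message.
if not client.validate_api_key():
    raise RuntimeError("Invalid or missing Nixtla API key")
```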
NixtlaClient.plot
NixtlaClient.plot (df:Union[pandas.core.frame.DataFrame,polars.dataframe.frame.DataFrame,NoneType]=None, forecasts_df:Union[pandas.core.frame.DataFrame,polars.dataframe.frame.DataFrame,NoneType]=None, id_col:str='unique_id', time_col:str='ds', target_col:str='y', unique_ids:Union[List[str],NoneType,numpy.ndarray]=None, plot_random:bool=True, max_ids:int=8, models:Optional[List[str]]=None, level:Optional[List[float]]=None, max_insample_length:Optional[int]=None, plot_anomalies:bool=False, engine:Literal['matplotlib','plotly','plotly-resampler']='matplotlib', resampler_kwargs:Optional[Dict]=None, ax:Union[ForwardRef('plt.Axes'),numpy.ndarray,ForwardRef('plotly.graph_objects.Figure'),NoneType]=None)
Plot forecasts and insample values.
Name | Type | Default | Details |
---|---|---|---|
df | Union | None | The DataFrame on which the function will operate. Expected to contain at least the following columns: - time_col: Column name in df that contains the time indices of the time series. This is typically a datetime column with regular intervals, e.g., hourly, daily, monthly data points. - target_col: Column name in df that contains the target variable of the time series, i.e., the variable we wish to predict or analyze. Additionally, you can pass multiple time series (stacked in the dataframe) considering an additional column: - id_col: Column name in df that identifies unique time series. Each unique value in this column corresponds to a unique time series. |
forecasts_df | Union | None | DataFrame with columns [unique_id, ds] and models. |
id_col | str | unique_id | Column that identifies each series. |
time_col | str | ds | Column that identifies each timestep; its values can be timestamps or integers. |
target_col | str | y | Column that contains the target. |
unique_ids | Union | None | Time series to plot. If None, time series are selected randomly. |
plot_random | bool | True | Select time series to plot randomly. |
max_ids | int | 8 | Maximum number of ids to plot. |
models | Optional | None | List of models to plot. |
level | Optional | None | List of prediction intervals to plot if passed. |
max_insample_length | Optional | None | Max number of train/insample observations to be plotted. |
plot_anomalies | bool | False | Plot anomalies for each prediction interval. |
engine | Literal | matplotlib | Library used to plot. ‘matplotlib’, ‘plotly’ or ‘plotly-resampler’. |
resampler_kwargs | Optional | None | Kwargs to be passed to the plotly-resampler constructor. For further customization (e.g. show_dash), call the method, store the plotting object, and add the extra arguments to its show_dash method. |
ax | Union | None | Object where plots will be added. |
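A plotting sketch with a toy long-format frame (columns `unique_id`, `ds`, `y`); the toy data is illustrative and `client` is the instance constructed above.

```python
import pandas as pd

# Two toy daily series stacked in long format: unique_id, ds, y.
df = pd.DataFrame({
    "unique_id": ["a"] * 30 + ["b"] * 30,
    "ds": list(pd.date_range("2024-01-01", periods=30, freq="D")) * 2,
    "y": range(60),
})

# Plot the in-sample values; pass forecasts_df as well to overlay predictions.
fig = client.plot(df, max_insample_length=30, engine="matplotlib")
```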
NixtlaClient.forecast
NixtlaClient.forecast (df:~AnyDFType, h:typing.Annotated[int,Gt(gt=0)], freq:Optional[str]=None, id_col:str='unique_id', time_col:str='ds', target_col:str='y', X_df:Optional[~AnyDFType]=None, level:Optional[List[Union[int,float]]]=None, quantiles:Optional[List[float]]=None, finetune_steps:typing.Annotated[int,Ge(ge=0)]=0, finetune_loss:Literal['default','mae','mse','rmse','mape','smape']='default', clean_ex_first:bool=True, validate_api_key:bool=False, add_history:bool=False, date_features:Union[bool,List[Union[str,Callable]]]=False, date_features_to_one_hot:Union[bool,List[str]]=False, model:Literal['azureai','timegpt-1','timegpt-1-long-horizon']='timegpt-1', num_partitions:Optional[Annotated[int,Gt(gt=0)]]=None, feature_contributions:bool=False)
Forecast your time series using TimeGPT.
Name | Type | Default | Details |
---|---|---|---|
df | AnyDFType | | The DataFrame on which the function will operate. Expected to contain at least the following columns: - time_col: Column name in df that contains the time indices of the time series. This is typically a datetime column with regular intervals, e.g., hourly, daily, monthly data points. - target_col: Column name in df that contains the target variable of the time series, i.e., the variable we wish to predict or analyze. Additionally, you can pass multiple time series (stacked in the dataframe) considering an additional column: - id_col: Column name in df that identifies unique time series. Each unique value in this column corresponds to a unique time series. |
h | Annotated | | Forecast horizon. |
freq | Optional | None | Frequency of the data. By default, the freq will be inferred automatically. See pandas’ available frequencies. |
id_col | str | unique_id | Column that identifies each series. |
time_col | str | ds | Column that identifies each timestep; its values can be timestamps or integers. |
target_col | str | y | Column that contains the target. |
X_df | Optional | None | DataFrame with [unique_id, ds] columns and df's future exogenous variables. |
level | Optional | None | Confidence levels between 0 and 100 for prediction intervals. |
quantiles | Optional | None | Quantiles to forecast, list between (0, 1). level and quantiles should not be used simultaneously. The output dataframe will have the quantile columns formatted as TimeGPT-q-(100 * q) for each q. 100 * q represents percentiles, but we choose this notation to avoid having dots in column names. |
finetune_steps | Annotated | 0 | Number of steps used to finetune TimeGPT on the new data. |
finetune_loss | Literal | default | Loss function to use for finetuning. Options are: default, mae, mse, rmse, mape, and smape. |
clean_ex_first | bool | True | Clean exogenous signal before making forecasts using TimeGPT. |
validate_api_key | bool | False | If True, validates api_key before sending requests. |
add_history | bool | False | Return fitted values of the model. |
date_features | Union | False | Features computed from the dates. Can be pandas date attributes or functions that will take the dates as input. If True, automatically adds the most used date features for the frequency of df. |
date_features_to_one_hot | Union | False | Apply one-hot encoding to these date features. If date_features=True, then all date features are one-hot encoded by default. |
model | Literal | timegpt-1 | Model to use as a string. Options are: timegpt-1 and timegpt-1-long-horizon. We recommend using timegpt-1-long-horizon if you want to forecast more than one seasonal period given the frequency of your data. |
num_partitions | Optional | None | Number of partitions to use. If None, the number of partitions will be equal to the available parallel resources in distributed environments. |
feature_contributions | bool | False | |
Returns | AnyDFType | | DataFrame with TimeGPT forecasts for point predictions and probabilistic predictions (if level is not None). |
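A forecasting sketch under the same assumptions as above (toy single daily series, `client` as constructed earlier); the interval column naming follows the level description in the table.

```python
import pandas as pd

# Toy single daily series in long format: unique_id, ds, y.
df = pd.DataFrame({
    "unique_id": "series_1",
    "ds": pd.date_range("2024-01-01", periods=60, freq="D"),
    "y": range(60),
})

# Point forecasts for the next 7 days plus 80% and 95% prediction intervals.
forecasts_df = client.forecast(
    df=df,
    h=7,
    freq="D",        # optional; inferred from `ds` when omitted
    level=[80, 95],  # adds lower/upper interval columns per level
)

# Overlay the forecasts on the historical values.
client.plot(df, forecasts_df, level=[80, 95])
```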
NixtlaClient.cross_validation
NixtlaClient.cross_validation (df:~AnyDFType, h:typing.Annotated[int,Gt(gt=0)], freq:Optional[str]=None, id_col:str='unique_id', time_col:str='ds', target_col:str='y', level:Optional[List[Union[int,float]]]=None, quantiles:Optional[List[float]]=None, validate_api_key:bool=False, n_windows:typing.Annotated[int,Gt(gt=0)]=1, step_size:Optional[Annotated[int,Gt(gt=0)]]=None, finetune_steps:typing.Annotated[int,Ge(ge=0)]=0, finetune_loss:str='default', clean_ex_first:bool=True, date_features:Union[bool,List[str]]=False, date_features_to_one_hot:Union[bool,List[str]]=False, model:str='timegpt-1', num_partitions:Optional[Annotated[int,Gt(gt=0)]]=None)
Perform cross validation in your time series using TimeGPT.
Name | Type | Default | Details |
---|---|---|---|
df | AnyDFType | | The DataFrame on which the function will operate. Expected to contain at least the following columns: - time_col: Column name in df that contains the time indices of the time series. This is typically a datetime column with regular intervals, e.g., hourly, daily, monthly data points. - target_col: Column name in df that contains the target variable of the time series, i.e., the variable we wish to predict or analyze. Additionally, you can pass multiple time series (stacked in the dataframe) considering an additional column: - id_col: Column name in df that identifies unique time series. Each unique value in this column corresponds to a unique time series. |
h | Annotated | | Forecast horizon. |
freq | Optional | None | Frequency of the data. By default, the freq will be inferred automatically. See pandas’ available frequencies. |
id_col | str | unique_id | Column that identifies each series. |
time_col | str | ds | Column that identifies each timestep; its values can be timestamps or integers. |
target_col | str | y | Column that contains the target. |
level | Optional | None | Confidence levels between 0 and 100 for prediction intervals. |
quantiles | Optional | None | Quantiles to forecast, list between (0, 1). level and quantiles should not be used simultaneously. The output dataframe will have the quantile columns formatted as TimeGPT-q-(100 * q) for each q. 100 * q represents percentiles, but we choose this notation to avoid having dots in column names. |
validate_api_key | bool | False | If True, validates api_key before sending requests. |
n_windows | Annotated | 1 | Number of windows to evaluate. |
step_size | Optional | None | Step size between each cross validation window. If None, it will be equal to h. |
finetune_steps | Annotated | 0 | Number of steps used to finetune TimeGPT on the new data. |
finetune_loss | str | default | Loss function to use for finetuning. Options are: default, mae, mse, rmse, mape, and smape. |
clean_ex_first | bool | True | Clean exogenous signal before making forecasts using TimeGPT. |
date_features | Union | False | Features computed from the dates. Can be pandas date attributes or functions that will take the dates as input. If True, automatically adds the most used date features for the frequency of df. |
date_features_to_one_hot | Union | False | Apply one-hot encoding to these date features. If date_features=True, then all date features are one-hot encoded by default. |
model | str | timegpt-1 | Model to use as a string. Options are: timegpt-1 and timegpt-1-long-horizon. We recommend using timegpt-1-long-horizon if you want to forecast more than one seasonal period given the frequency of your data. |
num_partitions | Optional | None | Number of partitions to use. If None, the number of partitions will be equal to the available parallel resources in distributed environments. |
Returns | AnyDFType | | DataFrame with cross validation forecasts. |
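A cross-validation sketch, again with a toy daily series and the `client` constructed above; the window settings are illustrative.

```python
import pandas as pd

# `client` is the NixtlaClient instance constructed above.
df = pd.DataFrame({
    "unique_id": "series_1",
    "ds": pd.date_range("2024-01-01", periods=120, freq="D"),
    "y": range(120),
})

# Three non-overlapping evaluation windows, each forecasting 7 steps ahead.
cv_df = client.cross_validation(
    df=df,
    h=7,
    n_windows=3,
    step_size=7,  # defaults to h when None
    level=[90],
)
```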
NixtlaClient.detect_anomalies
NixtlaClient.detect_anomalies (df:~AnyDFType, freq:Optional[str]=None, id_col:str='unique_id', time_col:str='ds', target_col:str='y', level:Union[int,float]=99, clean_ex_first:bool=True, validate_api_key:bool=False, date_features:Union[bool,List[str]]=False, date_features_to_one_hot:Union[bool,List[str]]=False, model:Literal['azureai','timegpt-1','timegpt-1-long-horizon']='timegpt-1', num_partitions:Optional[Annotated[int,Gt(gt=0)]]=None)
Detect anomalies in your time series using TimeGPT.
Name | Type | Default | Details |
---|---|---|---|
df | AnyDFType | | The DataFrame on which the function will operate. Expected to contain at least the following columns: - time_col: Column name in df that contains the time indices of the time series. This is typically a datetime column with regular intervals, e.g., hourly, daily, monthly data points. - target_col: Column name in df that contains the target variable of the time series, i.e., the variable we wish to predict or analyze. Additionally, you can pass multiple time series (stacked in the dataframe) considering an additional column: - id_col: Column name in df that identifies unique time series. Each unique value in this column corresponds to a unique time series. |
freq | Optional | None | Frequency of the data. By default, the freq will be inferred automatically. See pandas’ available frequencies. |
id_col | str | unique_id | Column that identifies each series. |
time_col | str | ds | Column that identifies each timestep; its values can be timestamps or integers. |
target_col | str | y | Column that contains the target. |
level | Union | 99 | Confidence level between 0 and 100 for detecting the anomalies. |
clean_ex_first | bool | True | Clean exogenous signal before making forecasts using TimeGPT. |
validate_api_key | bool | False | If True, validates api_key before sending requests. |
date_features | Union | False | Features computed from the dates. Can be pandas date attributes or functions that will take the dates as input. If True, automatically adds the most used date features for the frequency of df. |
date_features_to_one_hot | Union | False | Apply one-hot encoding to these date features. If date_features=True, then all date features are one-hot encoded by default. |
model | Literal | timegpt-1 | Model to use as a string. Options are: timegpt-1 and timegpt-1-long-horizon. We recommend using timegpt-1-long-horizon if you want to forecast more than one seasonal period given the frequency of your data. |
num_partitions | Optional | None | Number of partitions to use. If None, the number of partitions will be equal to the available parallel resources in distributed environments. |
Returns | AnyDFType | | DataFrame with anomalies flagged by TimeGPT. |
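An anomaly-detection sketch under the same assumptions; the injected spike only exists so the toy series has something to flag.

```python
import numpy as np
import pandas as pd

# `client` is the NixtlaClient instance constructed above.
y = np.sin(np.arange(120) / 7)
y[100] += 10  # inject an obvious spike
df = pd.DataFrame({
    "unique_id": "series_1",
    "ds": pd.date_range("2024-01-01", periods=120, freq="D"),
    "y": y,
})

# Flag observations that fall outside the 99% prediction interval.
anomalies_df = client.detect_anomalies(df=df, level=99)

# Highlight the flagged points on the series.
client.plot(df, anomalies_df, plot_anomalies=True)
```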