Get your `api_key` and give it a try yourself!
| unique_id | ds (min) | ds (max) | y (count) | y (min) | y (mean) | y (median) | y (max) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| FOODS_1 | 2011-01-29 | 2016-05-22 | 1941 | 0.0 | 2674.085523 | 2665.0 | 5493.0 |
| FOODS_2 | 2011-01-29 | 2016-05-22 | 1941 | 0.0 | 4015.984029 | 3894.0 | 9069.0 |
| FOODS_3 | 2011-01-29 | 2016-05-22 | 1941 | 10.0 | 16969.089129 | 16548.0 | 28663.0 |
| HOBBIES_1 | 2011-01-29 | 2016-05-22 | 1941 | 0.0 | 2936.122617 | 2908.0 | 5009.0 |
| HOBBIES_2 | 2011-01-29 | 2016-05-22 | 1941 | 0.0 | 279.053065 | 248.0 | 871.0 |
| HOUSEHOLD_1 | 2011-01-29 | 2016-05-22 | 1941 | 0.0 | 6039.594539 | 5984.0 | 11106.0 |
| HOUSEHOLD_2 | 2011-01-29 | 2016-05-22 | 1941 | 0.0 | 1566.840289 | 1520.0 | 2926.0 |
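A summary like the one above takes only a few lines of pandas. The sketch below assumes the data is already in Nixtla's long format with `unique_id`, `ds`, and `y` columns (as shown in the table); the file name is hypothetical:

```python
import pandas as pd

# Long-format frame with one row per (unique_id, ds) pair,
# e.g. the M5 category-level series shown above.
df = pd.read_csv("m5_categories.csv", parse_dates=["ds"])  # hypothetical file name

# Named aggregation: date range of each series plus basic stats of the target.
summary = df.groupby("unique_id").agg(
    ds_min=("ds", "min"),
    ds_max=("ds", "max"),
    count=("y", "count"),
    y_min=("y", "min"),
    y_mean=("y", "mean"),
    y_median=("y", "median"),
    y_max=("y", "max"),
)
print(summary)
```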
Then call `nixtla_client.forecast` to produce accurate, high-performance forecasts tailored to your unique time series.
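As a minimal sketch of what that call looks like with the `nixtla` Python SDK (the API key, file name, and 28-day horizon are placeholder assumptions):

```python
import pandas as pd
from nixtla import NixtlaClient

# Same long-format frame as above: unique_id, ds, y (hypothetical file name).
df = pd.read_csv("m5_categories.csv", parse_dates=["ds"])

# Authenticate with your API key (placeholder value).
nixtla_client = NixtlaClient(api_key="YOUR_API_KEY")

# A single call forecasts every series in df at once.
fcst_df = nixtla_client.forecast(
    df=df,
    h=28,           # forecast horizon (days), chosen here for illustration
    freq="D",       # daily data
    time_col="ds",
    target_col="y",
)
print(fcst_df.head())
```

Note that all seven category-level series are forecast in one call, with no per-series model fitting.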
Why Use TimeGPT Over Classical Models?
- Complex Patterns: TimeGPT captures non-linear trends that classical models miss.
- Minimal Preprocessing: TimeGPT requires little to no data preparation.
- Scalability: TimeGPT scales efficiently across multiple series without retraining.
Why Use TimeGPT Over Machine Learning Models?
- Automatic Pattern Recognition: Captures complex patterns from raw data, bypassing the need for feature engineering.
- Minimal Tuning: Works well without extensive tuning.
- Scalability: Forecasts across multiple series without retraining.
Why Use TimeGPT Over Deep Learning Models?
- Faster Setup: Quick setup and forecasting, unlike the lengthy configuration and training times of neural networks.
- Less Tuning: Performs well with minimal tuning and preprocessing, while neural networks often need extensive adjustments.
- Ease of Use: Simple deployment with high accuracy, making it accessible without deep technical expertise.
Model | RMSE | SMAPE |
---|---|---|
ARIMA | 724.9 | 5.50% |
LightGBM | 687.8 | 5.14% |
N-HiTS | 605.0 | 5.34% |
TimeGPT | 592.6 | 4.94% |
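For context on the metrics in this table, here is a minimal sketch of how RMSE and SMAPE are typically computed. The exact SMAPE variant used in the benchmark is an assumption; definitions differ slightly across libraries:

```python
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Root mean squared error: penalizes large misses quadratically.
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def smape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Symmetric MAPE as a percentage; assumes no point where both
    # actual and forecast are exactly zero.
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2
    return float(np.mean(np.abs(y_true - y_pred) / denom) * 100)

# Toy values for illustration only.
y_true = np.array([100.0, 120.0, 90.0])
y_pred = np.array([105.0, 115.0, 95.0])
print(rmse(y_true, y_pred), smape(y_true, y_pred))
```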
Sign up for an `api_key` and get started today! Happy forecasting, and enjoy the insights ahead!
| Scenario | TimeGPT | Classical Models (e.g., ARIMA) | Machine Learning Models (e.g., XGBoost, LightGBM) | Deep Learning Models (e.g., N-HiTS) |
| --- | --- | --- | --- | --- |
| Seasonal Patterns | Performs well with minimal setup | Handles seasonality with adjustments (e.g., SARIMA) | Performs well with feature engineering | Captures seasonal patterns effectively |
| Non-Linear Patterns | Excels, especially with complex non-linear patterns | Limited performance | Struggles without extensive feature engineering | Performs well with non-linear relationships |
| Large Datasets | Highly scalable across many series | Slow and resource-intensive | Scalable with optimized implementations | Requires significant resources for large datasets |
| Small Datasets | Performs well; requires only one data point to start | Performs well; may struggle with very sparse data | Performs adequately if enough features are extracted | May need a minimum data size to learn effectively |
| Preprocessing Required | Minimal preprocessing needed | Requires scaling, log transforms, etc., to meet model assumptions | Requires extensive feature engineering for complex patterns | Needs data normalization and preprocessing |
| Accuracy Requirements | Achieves high accuracy with minimal tuning | May struggle with complex accuracy requirements | Can achieve good accuracy with tuning | High accuracy possible, but with significant resource use |
| Scalability | Highly scalable with minimal task-specific configuration | Not easily scalable | Moderate scalability; feature engineering and tuning per task | Limited scalability due to resource demands |
| Computational Resources | Highly efficient; runs on CPU, no GPU needed | Light to moderate; scales poorly with large datasets | Moderate; depends on feature complexity | High resource consumption; often requires a GPU |
| Memory Requirements | Efficient memory usage for large datasets | Moderate memory requirements | High memory usage for larger datasets or many series | High memory consumption for larger datasets and multiple series |
| Technical Requirements & Domain Knowledge | Low; minimal technical setup and no domain expertise needed | Low to moderate; needs understanding of stationarity | Moderate to high; requires feature engineering and tuning | High; complex architecture and tuning |