This project implements a comprehensive benchmark evaluating the PatchTST (Patch Time Series Transformer) model across multiple time series datasets and forecasting horizons.
The benchmark evaluates PatchTST on 5 distinct datasets (Weather, Pedestrians, Solar, Tourism, and Traffic) with horizons ranging from 12 to 96 steps.
| Dataset | Average sMAPE |
|---|---|
| Weather | 153.1% |
| Solar | 146.9% |
| Pedestrians | 66.7% |
| Traffic | 42.9% |
| Tourism | 42.7% |
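The symmetric MAPE values above can exceed 100% because the metric is bounded at 200%, not 100%. As a reference, here is a minimal sketch of the metric, assuming the common symmetric definition (`200·|f−a| / (|a|+|f|)`, averaged over the horizon):

```python
def smape(actual, forecast):
    """Symmetric MAPE in percent; bounded above by 200%.

    Assumes the common definition: mean of 200*|f - a| / (|a| + |f|).
    """
    assert len(actual) == len(forecast) and len(actual) > 0
    total = 0.0
    for a, f in zip(actual, forecast):
        denom = abs(a) + abs(f)
        # Convention: a term with zero denominator contributes 0.
        total += 0.0 if denom == 0 else 200.0 * abs(f - a) / denom
    return total / len(actual)

print(smape([100, 200], [110, 180]))  # small relative errors -> small sMAPE
```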
The graphs below show how the error (RMSE and sMAPE) evolves as the forecasting horizon increases.
The heatmap identifies the datasets and horizons for which the PatchTST model is most stable.
PatchTST is a state-of-the-art Transformer architecture for time series forecasting.
- Python 3.x
- PyTorch - Deep Learning Framework
- Transformers (Hugging Face) - PatchTST Implementation
- Pandas & Matplotlib - Data manipulation and visualization
- Architecture: Patch-based approach with self-attention.
- Datasets: 5 diverse datasets from the Monash Time Series Repository.
- Benchmark: Systematic testing on horizons of [12, 24, 48, 96] steps.
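Structurally, the benchmark amounts to a nested loop over datasets and horizons. A hedged sketch of that loop, with synthetic series and a naive last-value forecast standing in for PatchTST (all names and data here are illustrative):

```python
import math

def smape(actual, forecast):
    # Symmetric MAPE in percent (bounded above by 200%).
    terms = [200.0 * abs(f - a) / (abs(a) + abs(f))
             for a, f in zip(actual, forecast) if abs(a) + abs(f) > 0]
    return sum(terms) / len(terms)

# Synthetic stand-ins for the Monash datasets.
series = {
    "Weather": [math.sin(t / 5) * 10 + 20 for t in range(300)],
    "Traffic": [(t % 24) * 2.0 + 1 for t in range(300)],
}
horizons = [12, 24, 48, 96]

results = []
for name, y in series.items():
    for h in horizons:
        train, test = y[:-h], y[-h:]
        forecast = [train[-1]] * h  # naive baseline standing in for PatchTST
        results.append((name, h, smape(test, forecast)))

for name, h, err in results:
    print(f"{name:8s} h={h:3d}  sMAPE={err:6.2f}%")
```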
Click the "Open in Colab" badge at the top of this README to run the notebook directly in your browser.
Luiz Anselmo Medeiros Lima
- GitHub: @luizmlima
- LinkedIn: Luiz Anselmo Lima
This project was developed as part of the MEISSA training program, a partnership between the Laboratório de Inteligência Artificial e Arquiteturas Dedicadas (LIAD) and HP.