Evaluation of Encoder-Only Transformer for Multi-Step Traffic Flow Prediction


Bibliographic Details
Main Authors: Mas Omar, Fitri Yakub, Andika Aji Wijaya, Ahmad Faiz Mohammad, Mohd Nazmin Maslan, Inge Dhamanti, Shi Zhongchao
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access:https://ieeexplore.ieee.org/document/11037770/
Description
Summary: Traffic flow prediction is critical for Intelligent Transportation Systems to alleviate congestion and optimize traffic management. The existing basic Encoder-Decoder Transformer model for multi-step prediction incurs high computational complexity, making it less efficient for real-time applications. Therefore, this paper presents an Encoder-Only Transformer that eliminates the decoder component while evaluating the model’s ability to capture long-term spatial-temporal dependencies. This simplification was found to reduce computational overhead while maintaining predictive accuracy comparable to the basic Encoder-Decoder Transformer and Long Short-Term Memory (LSTM) models. Experimental evaluations on two real-world datasets, Minnesota and California, for hourly prediction over 6, 12, and 24-hour horizons demonstrate the proposed model’s effectiveness. The proposed Encoder-Only Transformer outperforms LSTM networks across all horizon tasks on the Minnesota dataset, achieving up to a 17.33% improvement in Mean Absolute Error. The proposed model excels in the 24-hour horizon task on the California dataset but underperforms the LSTM and basic Encoder-Decoder Transformer models on shorter horizons. These findings highlight the Encoder-Only Transformer’s applicability to multi-step traffic flow prediction while emphasizing dataset-specific variations in model performance.
ISSN:2169-3536
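
The architectural idea in the abstract can be illustrated with a minimal sketch: an encoder-only Transformer that maps a window of past traffic readings directly to all future horizon steps in one forward pass, with no autoregressive decoder. This is an assumption-laden illustration only; the layer sizes, input window, and readout strategy below are illustrative choices, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class EncoderOnlyTrafficModel(nn.Module):
    """Hypothetical sketch of an encoder-only Transformer for multi-step
    traffic flow forecasting. Hyperparameters are illustrative assumptions;
    positional encoding is omitted for brevity."""

    def __init__(self, n_features=1, d_model=64, n_heads=4,
                 n_layers=2, horizon=24):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Read out the last encoded time step and map it to the whole
        # prediction horizon at once -- no decoder, no step-by-step rollout.
        self.head = nn.Linear(d_model, horizon)

    def forward(self, x):  # x: (batch, seq_len, n_features)
        h = self.encoder(self.input_proj(x))
        return self.head(h[:, -1, :])  # (batch, horizon)

model = EncoderOnlyTrafficModel(horizon=24)
pred = model(torch.randn(8, 48, 1))  # 48 past hourly readings per sample
print(pred.shape)  # torch.Size([8, 24])
```

Predicting the full horizon in a single pass is what removes the decoder's sequential cost; the trade-off, as the abstract's California results suggest, is that accuracy on shorter horizons can lag autoregressive alternatives.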