Test-Time Training with Adaptive Memory for Traffic Accident Severity Prediction
Traffic accident prediction is essential for improving road safety and optimizing intelligent transportation systems. However, deep learning models often struggle with distribution shifts and class imbalance, leading to degraded performance in real-world applications. While distribution shift is a common challenge in machine learning, Transformer-based models, despite their ability to capture long-term dependencies, often lack mechanisms for dynamic adaptation during inference. In this paper, we propose a TTT-Enhanced Transformer that incorporates Test-Time Training (TTT), enabling the model to refine its parameters during inference through a self-supervised auxiliary task. To further boost performance, an Adaptive Memory Layer (AML), a Feature Pyramid Network (FPN), Class-Balanced Attention (CBA), and Focal Loss are integrated to address multi-scale, long-term, and imbalance-related challenges. Experimental results show that our model achieved an overall accuracy of 96.86% and a severe-accident recall of 95.8%, outperforming the strongest Transformer baseline by 5.65% in accuracy and 9.6% in recall. Confusion matrix and ROC analyses confirm the model's superior classification balance and discriminatory power. These findings highlight the potential of our approach for enhancing real-time adaptability and robustness under shifting data distributions and class imbalance in intelligent transportation systems.
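The core mechanism described in the abstract is adapting the model on unlabeled test data through a self-supervised auxiliary objective before making a prediction. The sketch below illustrates only that general test-time-training pattern, not the paper's design: the masked-reconstruction auxiliary task, the single SGD adaptation step, and all module and parameter names are assumptions made for illustration.

```python
# Minimal sketch of test-time training (TTT) for a Transformer classifier.
# Assumptions (not from the paper): masked-feature reconstruction as the
# self-supervised auxiliary task, one SGD step per test batch, and the
# hypothetical module names used below.
import copy
import torch
import torch.nn as nn

class TTTClassifier(nn.Module):
    def __init__(self, num_features: int, num_classes: int, d_model: int = 64):
        super().__init__()
        self.embed = nn.Linear(num_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.cls_head = nn.Linear(d_model, num_classes)     # main task: severity classes
        self.recon_head = nn.Linear(d_model, num_features)  # auxiliary: reconstruct masked inputs

    def forward(self, x):
        # x: (batch, seq_len, num_features)
        h = self.encoder(self.embed(x))
        return self.cls_head(h.mean(dim=1))

    def aux_loss(self, x, mask_ratio: float = 0.15):
        # Self-supervised objective: mask random input positions and
        # reconstruct them from the shared encoder representation.
        mask = (torch.rand_like(x) < mask_ratio).float()
        h = self.encoder(self.embed(x * (1.0 - mask)))
        recon = self.recon_head(h)
        return ((recon - x) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)

def predict_with_ttt(model: TTTClassifier, x, steps: int = 1, lr: float = 1e-3):
    """Adapt a copy of the shared encoder on one test batch, then classify."""
    adapted = copy.deepcopy(model)  # keep the deployed weights unchanged
    opt = torch.optim.SGD(adapted.encoder.parameters(), lr=lr)
    adapted.train()
    for _ in range(steps):
        opt.zero_grad()
        adapted.aux_loss(x).backward()  # no labels needed at test time
        opt.step()
    adapted.eval()
    with torch.no_grad():
        return adapted(x).softmax(dim=-1)
```

Adapting a per-batch copy of the encoder keeps the deployed weights fixed between requests, which is one common way to prevent test-time updates from accumulating drift; the paper's actual update strategy may differ.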
| Main Authors: | Duo Peng, Weiqi Yan (Department of Computer and Information Science, Auckland University of Technology, New Zealand) |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-05-01 |
| Series: | Computers |
| Subjects: | test-time training; traffic accident prediction; transformer network; self-supervised learning; adaptive memory; class-balanced learning |
| ISSN: | 2073-431X |
| DOI: | 10.3390/computers14050186 |
| Online Access: | https://www.mdpi.com/2073-431X/14/5/186 |