TFF-Net: A Feature Fusion Graph Neural Network-Based Vehicle Type Recognition Approach for Low-Light Conditions


Bibliographic Details
Main Authors: Huizhi Xu, Wenting Tan, Yamei Li, Yue Tian
Format: Article
Language: English
Published: MDPI AG 2025-06-01
Series: Sensors
Subjects:
Online Access: https://www.mdpi.com/1424-8220/25/12/3613
Description
Summary: Accurate vehicle type recognition in low-light environments remains a critical challenge for intelligent transportation systems (ITSs). To address the performance degradation caused by insufficient lighting, complex backgrounds, and light interference, this paper proposes a Twin-Stream Feature Fusion Graph Neural Network (TFF-Net) model. The model employs multi-scale convolutional operations combined with an Efficient Channel Attention (ECA) module to extract discriminative local features, while independent convolutional layers capture hierarchical global representations. These features are mapped to nodes to construct fully connected graph structures, which hybrid graph neural networks (GNNs) process to model spatial dependencies and semantic associations. TFF-Net enhances feature representation by fusing the local details and global context produced by the GNN outputs. To further improve robustness, we propose an Adaptive Weighted Fusion-Bagging (AWF-Bagging) algorithm, which dynamically assigns weights to base classifiers based on their F1 scores. TFF-Net also incorporates dynamic feature weighting and label smoothing to address the class imbalance problem. Finally, the proposed TFF-Net is integrated into YOLOv11n (a lightweight real-time object detector) with an improved adaptive loss function. For experimental validation in low-light scenarios, we constructed the low-light vehicle dataset VDD-Light from the public dataset UA-DETRAC. Experimental results demonstrate that our model achieves 2.6% and 2.2% improvements in the mAP50 and mAP50-95 metrics over the baseline model. Compared to mainstream models and methods, the proposed model shows excellent performance and practical deployment potential.
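The abstract describes AWF-Bagging as weighting base classifiers by their F1 scores before fusing their outputs. The paper's exact formulation is not given here; the following is a minimal sketch of one plausible reading, assuming the weights are simply the normalized F1 scores applied to each classifier's class-probability output (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def awf_bagging_predict(probs_per_clf, f1_scores):
    """Sketch of F1-weighted ensemble fusion in the spirit of AWF-Bagging.

    probs_per_clf: array of shape (n_classifiers, n_samples, n_classes),
                   each base classifier's predicted class probabilities.
    f1_scores:     validation F1 score of each base classifier.
    """
    f1 = np.asarray(f1_scores, dtype=float)
    weights = f1 / f1.sum()                      # normalize F1 scores into fusion weights
    probs = np.asarray(probs_per_clf, dtype=float)
    fused = np.tensordot(weights, probs, axes=1) # weighted average over classifiers
    return fused.argmax(axis=-1)                 # predicted class index per sample

# Two base classifiers, two samples, two classes: the stronger classifier
# (F1 = 0.9) dominates the fused decision.
probs = [
    [[0.6, 0.4], [0.3, 0.7]],  # classifier A
    [[0.2, 0.8], [0.9, 0.1]],  # classifier B
]
print(awf_bagging_predict(probs, f1_scores=[0.9, 0.1]))  # -> [0 1]
```

The design point is that a poorly performing base classifier cannot veto the ensemble: its vote is scaled down in proportion to its validation F1, whereas plain bagging would average all classifiers equally.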
ISSN: 1424-8220