Real-Time Runway Detection Using Dual-Modal Fusion of Visible and Infrared Data
Advancements in aviation technology have made intelligent navigation systems essential for improving flight safety and efficiency, particularly in low-visibility conditions. Radar and GPS systems face limitations in bad weather, making visible–infrared sensor fusion a promising alternative. This study proposes a salient object detection (SOD) method that integrates visible and infrared sensors for robust airport runway detection in complex environments…
| Main Authors: | Lichun Yang, Jianghao Wu, Hongguang Li, Chunlei Liu, Shize Wei |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-02-01 |
| Series: | Remote Sensing |
| Subjects: | intelligent navigation; salient object detection; dual-modal fusion; airport runway detection; deep learning; lightweight network |
| Online Access: | https://www.mdpi.com/2072-4292/17/4/669 |
| _version_ | 1850081141284929536 |
|---|---|
| author | Lichun Yang; Jianghao Wu; Hongguang Li; Chunlei Liu; Shize Wei |
| author_facet | Lichun Yang; Jianghao Wu; Hongguang Li; Chunlei Liu; Shize Wei |
| author_sort | Lichun Yang |
| collection | DOAJ |
| description | Advancements in aviation technology have made intelligent navigation systems essential for improving flight safety and efficiency, particularly in low-visibility conditions. Radar and GPS systems face limitations in bad weather, making visible–infrared sensor fusion a promising alternative. This study proposes a salient object detection (SOD) method that integrates visible and infrared sensors for robust airport runway detection in complex environments. We introduce a large-scale visible–infrared runway dataset (RDD5000) and develop a SOD algorithm capable of detecting salient targets from unaligned visible and infrared images. To enable real-time processing, we design a lightweight dual-modal fusion network (DCFNet) with an independent–shared encoder and a cross-layer attention mechanism to enhance feature extraction and fusion. Experimental results show that the MobileNetV2-based lightweight version achieves 155 FPS on a single GPU, significantly outperforming previous methods such as DCNet (4.878 FPS) and SACNet (27 FPS), making it suitable for real-time deployment on airborne systems. This work offers a novel and efficient solution for intelligent navigation in aviation. |
| format | Article |
| id | doaj-art-afa9e2f2d90c4a4c8b3018ee64067dbd |
| institution | DOAJ |
| issn | 2072-4292 |
| language | English |
| publishDate | 2025-02-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | Remote Sensing |
| spelling | doaj-art-afa9e2f2d90c4a4c8b3018ee64067dbd; 2025-08-20T02:44:47Z; eng; MDPI AG; Remote Sensing; 2072-4292; 2025-02-01; vol. 17, no. 4, art. 669; 10.3390/rs17040669; Real-Time Runway Detection Using Dual-Modal Fusion of Visible and Infrared Data; Lichun Yang (School of Transportation Science and Engineering, Beihang University, Beijing 100191, China); Jianghao Wu (School of Transportation Science and Engineering, Beihang University, Beijing 100191, China); Hongguang Li (Institute of Unmanned System, Beihang University, Beijing 100191, China); Chunlei Liu (Institute of Unmanned System, Beihang University, Beijing 100191, China); Shize Wei (School of Electronics and Information Engineering, Beihang University, Beijing 100191, China); https://www.mdpi.com/2072-4292/17/4/669; intelligent navigation; salient object detection; dual-modal fusion; airport runway detection; deep learning; lightweight network |
| spellingShingle | Lichun Yang; Jianghao Wu; Hongguang Li; Chunlei Liu; Shize Wei; Real-Time Runway Detection Using Dual-Modal Fusion of Visible and Infrared Data; Remote Sensing; intelligent navigation; salient object detection; dual-modal fusion; airport runway detection; deep learning; lightweight network |
| title | Real-Time Runway Detection Using Dual-Modal Fusion of Visible and Infrared Data |
| title_full | Real-Time Runway Detection Using Dual-Modal Fusion of Visible and Infrared Data |
| title_fullStr | Real-Time Runway Detection Using Dual-Modal Fusion of Visible and Infrared Data |
| title_full_unstemmed | Real-Time Runway Detection Using Dual-Modal Fusion of Visible and Infrared Data |
| title_short | Real-Time Runway Detection Using Dual-Modal Fusion of Visible and Infrared Data |
| title_sort | real time runway detection using dual modal fusion of visible and infrared data |
| topic | intelligent navigation; salient object detection; dual-modal fusion; airport runway detection; deep learning; lightweight network |
| url | https://www.mdpi.com/2072-4292/17/4/669 |
| work_keys_str_mv | AT lichunyang realtimerunwaydetectionusingdualmodalfusionofvisibleandinfrareddata AT jianghaowu realtimerunwaydetectionusingdualmodalfusionofvisibleandinfrareddata AT hongguangli realtimerunwaydetectionusingdualmodalfusionofvisibleandinfrareddata AT chunleiliu realtimerunwaydetectionusingdualmodalfusionofvisibleandinfrareddata AT shizewei realtimerunwaydetectionusingdualmodalfusionofvisibleandinfrareddata |
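The abstract describes a lightweight dual-modal fusion design: independent encoders for the visible and infrared streams, a shared encoder stage, cross-layer attention for fusion, and a MobileNetV2-based lightweight variant for real-time speed. The snippet below is a minimal, hypothetical PyTorch sketch of that general pattern only, not the authors' DCFNet; the module names (`VisIrFusionNet`, `conv_block`), channel sizes, and the use of `nn.MultiheadAttention` for the fusion step are illustrative assumptions.

```python
# Illustrative sketch only: a minimal dual-modal (visible + infrared) fusion
# network. Separate "independent" encoders per modality, a "shared" encoder
# stage, and a cross-attention fusion step producing a saliency map.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # 3x3 conv + BatchNorm + ReLU, stride 2 to downsample
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class VisIrFusionNet(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Independent encoders: visible is 3-channel RGB, infrared is 1-channel
        self.vis_enc = conv_block(3, channels)
        self.ir_enc = conv_block(1, channels)
        # Shared encoder stage applied to both modality streams
        self.shared_enc = conv_block(channels, channels)
        # Cross-attention fusion: visible queries attend to infrared keys/values
        self.cross_attn = nn.MultiheadAttention(
            embed_dim=channels, num_heads=4, batch_first=True
        )
        # Simple head producing a one-channel saliency map
        self.head = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, vis, ir):
        fv = self.shared_enc(self.vis_enc(vis))  # (B, C, H/4, W/4)
        fi = self.shared_enc(self.ir_enc(ir))    # (B, C, H/4, W/4)
        b, c, h, w = fv.shape
        # Flatten spatial dims into token sequences for attention
        qv = fv.flatten(2).transpose(1, 2)       # (B, H*W, C)
        ki = fi.flatten(2).transpose(1, 2)
        fused, _ = self.cross_attn(qv, ki, ki)
        fused = fused.transpose(1, 2).reshape(b, c, h, w)
        return self.head(fused)                  # (B, 1, H/4, W/4) saliency map


if __name__ == "__main__":
    net = VisIrFusionNet()
    vis = torch.randn(1, 3, 256, 256)
    ir = torch.randn(1, 1, 256, 256)
    print(net(vis, ir).shape)  # torch.Size([1, 1, 64, 64])
```

In this sketch the visible features act as queries and the infrared features as keys/values; the cross-layer attention in the paper's DCFNet may differ in direction, layers involved, and granularity, and the paper's unaligned-image handling is not modeled here.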