Squeeze-EnGAN: Memory Efficient and Unsupervised Low-Light Image Enhancement for Intelligent Vehicles

Bibliographic Details
Main Authors: Haegyo In, Juhum Kweon, Changjoo Moon
Format: Article
Language: English
Published: MDPI AG 2025-03-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/25/6/1825
Description
Summary: Intelligent vehicles, such as autonomous cars, drones, and robots, rely on sensors to gather environmental information and respond accordingly. RGB cameras are commonly used due to their low cost and high resolution, but they perform poorly in low-light conditions. Employing LiDAR or specialized cameras can address this limitation, but these solutions often incur high costs. Deep learning-based low-light image enhancement (LLIE) methods offer an alternative, yet existing models struggle to adapt to road scenes. Furthermore, most LLIE models rely on supervised training and are therefore heavily constrained by the scarcity of paired low-light and normal-light datasets; obtaining such pairs for driving scenes is especially challenging. To address these issues, this paper proposes Squeeze-EnGAN, a memory-efficient, GAN-based LLIE method capable of unsupervised learning without paired image datasets. Squeeze-EnGAN incorporates a fire module into a U-Net architecture, substantially reducing the number of parameters and Multiply-Accumulate Operations (MACs) compared to its base model, EnlightenGAN. Additionally, Squeeze-EnGAN achieves real-time performance on embedded devices such as the NVIDIA Jetson Xavier (0.061 s). Notably, the enhanced images improve object detection performance over the original low-light images, demonstrating the model's potential to support high-level vision tasks in intelligent vehicles.
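For readers unfamiliar with the fire module mentioned above: it is the SqueezeNet-style block that replaces wide convolutions with a narrow 1x1 "squeeze" convolution followed by parallel 1x1 and 3x3 "expand" convolutions, which is how parameters and MACs are reduced relative to a plain U-Net block. The paper's exact layer widths, activations, and placement within the generator are not given in this record, so the PyTorch sketch below is only a generic illustration with assumed channel counts, not the authors' implementation.

    import torch
    import torch.nn as nn

    class FireModule(nn.Module):
        """SqueezeNet-style fire module (illustrative): squeeze with a 1x1 conv,
        then expand with parallel 1x1 and 3x3 convs and concatenate the results."""

        def __init__(self, in_channels, squeeze_channels, expand_channels):
            super().__init__()
            self.squeeze = nn.Conv2d(in_channels, squeeze_channels, kernel_size=1)
            self.expand1x1 = nn.Conv2d(squeeze_channels, expand_channels, kernel_size=1)
            self.expand3x3 = nn.Conv2d(squeeze_channels, expand_channels, kernel_size=3, padding=1)
            # Activation choice is an assumption; the paper may use a different nonlinearity.
            self.act = nn.LeakyReLU(0.2, inplace=True)

        def forward(self, x):
            s = self.act(self.squeeze(x))
            # Output has 2 * expand_channels feature maps, spatial size unchanged.
            return torch.cat([self.act(self.expand1x1(s)),
                              self.act(self.expand3x3(s))], dim=1)

    # Example with hypothetical widths: 64 channels squeezed to 16, expanded back to 64.
    x = torch.randn(1, 64, 128, 128)
    fire = FireModule(in_channels=64, squeeze_channels=16, expand_channels=32)
    print(fire(x).shape)  # torch.Size([1, 64, 128, 128])

Because the squeeze convolution bottlenecks the channel count before the 3x3 expansion, such a block uses far fewer weights and MACs than a single 3x3 convolution over the full channel width, which is the efficiency argument the abstract makes for Squeeze-EnGAN.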
ISSN: 1424-8220