Generative Adversarial Network-Based Lightweight High-Dynamic-Range Image Reconstruction Model


Bibliographic Details
Main Authors: Gustavo de Souza Ferreti, Thuanne Paixão, Ana Beatriz Alvarez
Format: Article
Language: English
Published: MDPI AG, 2025-04-01
Series: Applied Sciences
Subjects:
Online Access: https://www.mdpi.com/2076-3417/15/9/4801
Summary: The generation of High-Dynamic-Range (HDR) images is essential for capturing details across a wide range of brightness levels, but current deep-learning-based reconstruction methods often require significant computational resources, limiting their applicability on devices with moderate hardware. In this context, this paper presents a lightweight architecture for reconstructing HDR images from three Low-Dynamic-Range (LDR) inputs. The proposed model is based on Generative Adversarial Networks and replaces traditional convolutions with depthwise separable convolutions, reducing the number of parameters while maintaining high visual quality and minimizing luminance artifacts. The proposal is evaluated through quantitative, qualitative, and computational-cost analyses based on the number of parameters and FLOPs; for the qualitative analysis, the models are compared on samples that pose reconstruction challenges. The proposed model achieves a PSNR-μ of 43.51 dB and an SSIM-μ of 0.9917, delivering quality metrics competitive with HDR-GAN while reducing computational cost by 6× in FLOPs and 7× in parameters and using roughly half the GPU memory, demonstrating an effective balance between visual fidelity and efficiency.
ISSN:2076-3417
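The efficiency claim in the abstract rests on swapping standard convolutions for depthwise separable ones. As a rough illustration of where the parameter savings come from (this is not the authors' code; the channel count and kernel size below are assumed purely for the example), the parameter counts compare as follows:

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1x1 pointwise convolution (bias ignored)."""
    depthwise = c_in * k * k        # per-channel spatial filtering
    pointwise = c_in * c_out        # 1x1 cross-channel mixing
    return depthwise + pointwise

# Illustrative values only: 64 -> 64 channels, 3x3 kernel
std = conv_params(64, 64, 3)                 # 36864 parameters
sep = depthwise_separable_params(64, 64, 3)  # 4672 parameters
print(std, sep, round(std / sep, 1))         # roughly 7.9x fewer parameters
```

The general ratio is about 1/c_out + 1/k², so for typical channel widths a 3x3 depthwise separable layer costs close to one-ninth of the parameters of its standard counterpart, which is consistent in spirit with the ~7× model-size reduction the abstract reports.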