MCATD: Multi-Scale Contextual Attention Transformer Diffusion for Unsupervised Low-Light Image Enhancement

Bibliographic Details
Main Authors: Da Cheng, Yongsheng Qian, Junwei Zeng, Xuting Wei, Futao Zhang
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11014086/
Description
Summary: Low-light image enhancement (LLIE) remains challenging because images captured under insufficient illumination exhibit complex degradation patterns, including non-linear intensity mappings, spatially varying noise distributions, and content-dependent color distortions. Despite significant advances, existing methods struggle with three fundamental challenges: 1) difficulty in preserving structural details while simultaneously reducing noise, 2) limited generalization across diverse lighting conditions and scene types, and 3) computational inefficiency on complex natural scenes. Recent diffusion-based methods have shown promise, but they often generalize poorly and require paired training data. We propose MCATD, a novel unsupervised framework that integrates adaptive sampling, multi-scale feature extraction, and dynamic enhancement capabilities into diffusion models for LLIE. The framework consists of three key components: 1) a Dynamic Adaptive Diffusion Sampling (DADS) strategy that adjusts the number of sampling steps to image complexity, 2) a Multi-scale Contextual Attention Transformer (MCAT) network that captures features at different scales with attention mechanisms, and 3) a Multi-scale Dynamic Structure-Preserving (MDSP) loss that preserves image structure while optimizing perceptual quality. Experimental results on multiple benchmarks demonstrate that our method outperforms state-of-the-art unsupervised approaches and achieves performance comparable to supervised methods while generalizing better. Ablation studies further validate the effectiveness of each proposed component. The proposed framework not only advances unsupervised LLIE but also offers insights into leveraging diffusion models for broader image restoration tasks.
ISSN: 2169-3536
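
The summary describes DADS only as adjusting the number of diffusion sampling steps to each image's complexity. The sketch below illustrates one plausible reading of that idea in PyTorch: complexity is proxied by the mean gradient magnitude of the luma channel, then mapped linearly to a per-image step budget. The complexity estimator, the linear ramp, and the bounds (`min_steps`, `max_steps`, `scale`) are all assumptions for illustration, not the paper's actual design.

```python
import torch

def estimate_complexity(img: torch.Tensor) -> float:
    """Proxy for image complexity: mean gradient magnitude of the luma channel.

    `img` is a (3, H, W) tensor in [0, 1]. This estimator is an assumption;
    the record does not say how MCATD actually measures complexity.
    """
    luma = 0.299 * img[0] + 0.587 * img[1] + 0.114 * img[2]
    gx = (luma[:, 1:] - luma[:, :-1]).abs().mean()  # horizontal gradients
    gy = (luma[1:, :] - luma[:-1, :]).abs().mean()  # vertical gradients
    return (gx + gy).item()

def adaptive_num_steps(img: torch.Tensor,
                       min_steps: int = 10,
                       max_steps: int = 50,
                       scale: float = 0.1) -> int:
    """Map the complexity score to a per-image sampling-step budget.

    Smooth images get few denoising steps, heavily textured ones get more,
    saturating at `max_steps`. The linear mapping and bounds are illustrative.
    """
    frac = min(estimate_complexity(img) / scale, 1.0)
    return min_steps + round((max_steps - min_steps) * frac)
```

In a DDIM-style sampler, this budget would set the length of the per-image timestep schedule, which is one way the step-count adaptation attributed to DADS could be realized.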
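Similarly, the MDSP loss is described only as preserving structure across scales while optimizing perceptual quality. One plausible reading, sketched below under stated assumptions, sums a gradient-matching structure term over several downsampled resolutions; since the method is unsupervised, the reference here is the low-light input itself (whose structure should survive enhancement) rather than a paired ground truth. The scale set, the weights, and the choice of a gradient-based structure term are hypothetical.

```python
import torch
import torch.nn.functional as F

def gradient_loss(pred: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    """L1 distance between spatial gradients, a common structure-preservation term."""
    def grads(t: torch.Tensor):
        # Finite differences along width and height of (N, C, H, W) tensors.
        return t[..., :, 1:] - t[..., :, :-1], t[..., 1:, :] - t[..., :-1, :]
    pgx, pgy = grads(pred)
    rgx, rgy = grads(ref)
    return (pgx - rgx).abs().mean() + (pgy - rgy).abs().mean()

def mdsp_like_loss(pred: torch.Tensor,
                   low_light_input: torch.Tensor,
                   scales=(1, 2, 4),
                   weights=(1.0, 0.5, 0.25)) -> torch.Tensor:
    """Sum a structure term over several resolutions (the 'multi-scale' part).

    `pred` is the enhanced output and `low_light_input` the original image,
    both (N, C, H, W). Scales, weights, and the gradient term are assumptions;
    the record only says MDSP preserves structure while optimizing
    perceptual quality.
    """
    total = pred.new_zeros(())
    for s, w in zip(scales, weights):
        p = F.avg_pool2d(pred, s) if s > 1 else pred
        r = F.avg_pool2d(low_light_input, s) if s > 1 else low_light_input
        total = total + w * gradient_loss(p, r)
    return total
```

In practice such a structure term would be combined with a perceptual objective (e.g., a feature-space distance), consistent with the summary's pairing of structure preservation and perceptual quality.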