SwinLightGAN a study of low-light image enhancement algorithms using depth residuals and transformer techniques

Bibliographic Details
Main Authors: Min He, Rugang Wang, Mingyang Zhang, Feiyang Lv, Yuanyuan Wang, Feng Zhou, Xuesheng Bian
Format: Article
Language: English
Published: Nature Portfolio 2025-04-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-95329-8
author Min He
Rugang Wang
Mingyang Zhang
Feiyang Lv
Yuanyuan Wang
Feng Zhou
Xuesheng Bian
collection DOAJ
description Abstract Contemporary algorithms for enhancing images captured in low-light conditions prioritize improving brightness and contrast but often neglect image detail. This study introduces the Swin Transformer-based Light-enhancing Generative Adversarial Network (SwinLightGAN), a novel generative adversarial network (GAN) that effectively enhances image detail under low-light conditions. The network integrates a generator based on a residual, skip-connected U-shaped Network (U-Net) architecture for precise local detail extraction with an illumination network built on the Shifted Window Transformer (Swin Transformer), which captures multi-scale spatial features and global context. This combination produces high-quality images that resemble those taken under normal lighting while retaining intricate detail. Through adversarial training that employs discriminators operating at multiple scales and a blend of loss functions, SwinLightGAN makes its outputs difficult to distinguish from authentic images, ensuring superior enhancement quality. Extensive experiments on multiple unpaired datasets demonstrate SwinLightGAN’s outstanding performance: it achieves Naturalness Image Quality Evaluator (NIQE) scores ranging from 5.193 to 5.397, Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) scores from 28.879 to 32.040, and Patch-based Image Quality Evaluator (PIQE) scores from 38.280 to 44.479, highlighting its efficacy in delivering high-quality enhancement across diverse metrics.
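The abstract names the main architectural pieces, a residual U-Net generator, a Swin Transformer illumination branch, multi-scale discriminators, and a blended loss, without giving implementation details. The PyTorch sketch below is an illustrative reconstruction of that kind of pipeline, not the authors' code: module names, channel widths, the plain convolutional branch standing in for the Swin Transformer illumination network, and the least-squares-plus-L1 loss mix are all assumptions.

```python
# Illustrative sketch (assumed components, not the published implementation):
# residual U-Net generator with encoder-decoder skips and a coarse
# illumination branch, a multi-scale PatchGAN-style discriminator,
# and a blended adversarial + L1 generator loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual skip connection


class ResidualUNetGenerator(nn.Module):
    """U-shaped generator with residual blocks and encoder-decoder skips."""

    def __init__(self, base=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, base, 3, padding=1), ResidualBlock(base))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, 2, 1), ResidualBlock(base * 2))
        self.enc3 = nn.Sequential(nn.Conv2d(base * 2, base * 4, 3, 2, 1), ResidualBlock(base * 4))
        # Plain convolutional global-context branch standing in for the
        # Swin Transformer illumination network described in the abstract.
        self.illum = nn.Sequential(
            nn.Conv2d(base * 4, base, 1), nn.ReLU(inplace=True),
            nn.Conv2d(base, 1, 1), nn.Sigmoid(),
        )
        self.up2, self.dec2 = nn.ConvTranspose2d(base * 4, base * 2, 2, 2), ResidualBlock(base * 2)
        self.up1, self.dec1 = nn.ConvTranspose2d(base * 2, base, 2, 2), ResidualBlock(base)
        self.out = nn.Conv2d(base, 3, 3, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        illum = F.interpolate(self.illum(e3), size=x.shape[-2:], mode="bilinear", align_corners=False)
        d2 = self.dec2(self.up2(e3) + e2)  # skip connection
        d1 = self.dec1(self.up1(d2) + e1)  # skip connection
        # Brighten the base output by the predicted illumination map.
        return (torch.sigmoid(self.out(d1)) * (1.0 + illum)).clamp(0.0, 1.0)


class PatchDiscriminator(nn.Module):
    def __init__(self, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, base, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, 4, 1, 1),  # patch-wise real/fake logits
        )

    def forward(self, x):
        return self.net(x)


class MultiScaleDiscriminator(nn.Module):
    """Applies the same PatchGAN head at full and half resolution."""

    def __init__(self):
        super().__init__()
        self.heads = nn.ModuleList([PatchDiscriminator(), PatchDiscriminator()])

    def forward(self, x):
        return [head(x if i == 0 else F.avg_pool2d(x, 2)) for i, head in enumerate(self.heads)]


def generator_loss(d_fake_outs, fake, reference, l1_weight=10.0):
    """Blend of least-squares adversarial terms and an L1 content term."""
    adv = sum(F.mse_loss(o, torch.ones_like(o)) for o in d_fake_outs)
    return adv + l1_weight * F.l1_loss(fake, reference)


if __name__ == "__main__":
    g, d = ResidualUNetGenerator(), MultiScaleDiscriminator()
    low = torch.rand(1, 3, 64, 64)     # dummy low-light image
    normal = torch.rand(1, 3, 64, 64)  # dummy normal-light reference
    enhanced = g(low)
    print(enhanced.shape, float(generator_loss(d(enhanced), enhanced, normal)))
```

Note that the L1 term above assumes a paired reference image for illustration only; the paper reports results on unpaired datasets and evaluates with the no-reference metrics cited in the abstract (NIQE, BRISQUE, PIQE), so any standard implementation of those metrics can score the enhanced outputs directly.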
format Article
id doaj-art-5af5f66e1d5c4e77b8922051b7158c0e
institution DOAJ
issn 2045-2322
language English
publishDate 2025-04-01
publisher Nature Portfolio
record_format Article
series Scientific Reports
spelling Min He, Rugang Wang, Mingyang Zhang, Feiyang Lv, Yuanyuan Wang, Feng Zhou, Xuesheng Bian (all: School of Information Engineering, Yancheng Institute of Technology). SwinLightGAN a study of low-light image enhancement algorithms using depth residuals and transformer techniques. Scientific Reports (Nature Portfolio, ISSN 2045-2322), 2025-04-01. https://doi.org/10.1038/s41598-025-95329-8
title SwinLightGAN a study of low-light image enhancement algorithms using depth residuals and transformer techniques
url https://doi.org/10.1038/s41598-025-95329-8