Adaptive Context-Aware Generative Adversarial Network for Low-quality Image Enhancement
Low-quality image enhancement methods can effectively improve image quality and detail, and have attracted great attention in various fields. However, current methods still face two issues: (1) They commonly learn a deterministic generation mapping between low-quality and normal images via re...
| Main Authors: | Xingyu Pan, Fengling Chen |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Tamkang University Press, 2025-06-01 |
| Series: | Journal of Applied Science and Engineering |
| Subjects: | data generation; dual attention; flow generative network |
| Online Access: | http://jase.tku.edu.tw/articles/jase-202601-29-01-0012 |
| _version_ | 1850157298633146368 |
|---|---|
| author | Xingyu Pan; Fengling Chen |
| author_facet | Xingyu Pan; Fengling Chen |
| author_sort | Xingyu Pan |
| collection | DOAJ |
| description | Low-quality image enhancement methods can effectively improve image quality and detail, and have attracted great attention in various fields. However, current methods still face two issues: (1) They commonly learn a deterministic generation mapping between low-quality and normal images by relying on pixel-level reconstruction, leading to improper brightness and noise during enhancement. (2) They use only one type of generative model, either explicit or implicit, which limits the flexibility and efficiency of the model. To this end, a novel flow-based generative adversarial network with dual attention (FGAN-DA) is devised for data generation. Specifically, FGAN-DA constructs a hybrid generative model by combining explicit and implicit components within the GAN architecture, which effectively alleviates the detail blurring and singularities caused by a single generation model. FGAN-DA comprises a dual-attention feature extraction module, an invertible flow generation network, and a Markov discriminant network. The three modules collaborate seamlessly to enhance images with good perceptual quality, which effectively boosts the performance of FGAN-DA. Finally, quantitative metrics and visual quality evaluations demonstrate that FGAN-DA sets a new baseline in generating images with good perceptual quality. |
| format | Article |
| id | doaj-art-d707f82750bc4788bc3b562ec68c4f59 |
| institution | OA Journals |
| issn | 2708-9967 2708-9975 |
| language | English |
| publishDate | 2025-06-01 |
| publisher | Tamkang University Press |
| record_format | Article |
| series | Journal of Applied Science and Engineering |
| spelling | doaj-art-d707f82750bc4788bc3b562ec68c4f59; 2025-08-20T02:24:13Z; eng; Tamkang University Press; Journal of Applied Science and Engineering; ISSN 2708-9967, 2708-9975; 2025-06-01; vol. 29, no. 1, pp. 121–128; doi:10.6180/jase.202601_29(1).0012; Adaptive Context-Aware Generative Adversarial Network for Low-quality Image Enhancement; Xingyu Pan (School of Electronic and Electrical Engineering, Zhengzhou University of Science and Technology, Zhengzhou, 450064, China); Fengling Chen (Zhengzhou Electric Power College, Zhengzhou, 450003, China); http://jase.tku.edu.tw/articles/jase-202601-29-01-0012; data generation; dual attention; flow generative network |
| spellingShingle | Xingyu Pan; Fengling Chen; Adaptive Context-Aware Generative Adversarial Network for Low-quality Image Enhancement; Journal of Applied Science and Engineering; data generation; dual attention; flow generative network |
| title | Adaptive Context-Aware Generative Adversarial Network for Low-quality Image Enhancement |
| title_full | Adaptive Context-Aware Generative Adversarial Network for Low-quality Image Enhancement |
| title_fullStr | Adaptive Context-Aware Generative Adversarial Network for Low-quality Image Enhancement |
| title_full_unstemmed | Adaptive Context-Aware Generative Adversarial Network for Low-quality Image Enhancement |
| title_short | Adaptive Context-Aware Generative Adversarial Network for Low-quality Image Enhancement |
| title_sort | adaptive context aware generative adversarial network for low quality image enhancement |
| topic | data generation; dual attention; flow generative network |
| url | http://jase.tku.edu.tw/articles/jase-202601-29-01-0012 |
| work_keys_str_mv | AT xingyupan adaptivecontextawaregenerativeadversarialnetworkforlowqualityimageenhancement AT fenglingchen adaptivecontextawaregenerativeadversarialnetworkforlowqualityimageenhancement |
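
The invertible flow generation network the abstract mentions is the "explicit" half of the hybrid model: a flow maps data to latent codes through a bijection with a tractable Jacobian, so it can be trained by exact likelihood and run backwards for generation. The article's actual code is not reproduced here; as an illustrative sketch only, a minimal RealNVP-style affine coupling layer (the standard building block of such flows) in plain Python demonstrates the key property, exact invertibility with a cheap log-determinant. The class name and the toy linear scale/translation "networks" are inventions of this sketch, not part of the paper.

```python
import math
import random

class AffineCoupling:
    """RealNVP-style affine coupling layer: split the input, transform one
    half with a scale and shift computed from the other half. Because the
    transformed half never feeds back into its own parameters, the map is
    exactly invertible and its log|det Jacobian| is just the sum of the
    log-scales. The 'networks' below are toy linear maps for illustration."""

    def __init__(self, dim, seed=0):
        rnd = random.Random(seed)
        self.half = dim // 2
        n_out = dim - self.half
        # Toy scale (Ws) and translation (Wt) weights, one row per output.
        self.Ws = [[0.1 * rnd.gauss(0, 1) for _ in range(self.half)]
                   for _ in range(n_out)]
        self.Wt = [[0.1 * rnd.gauss(0, 1) for _ in range(self.half)]
                   for _ in range(n_out)]

    def _nets(self, x1):
        # Log-scale s (tanh-bounded for stability) and translation t,
        # both functions of the untouched half only.
        s = [math.tanh(sum(w * v for w, v in zip(row, x1))) for row in self.Ws]
        t = [sum(w * v for w, v in zip(row, x1)) for row in self.Wt]
        return s, t

    def forward(self, x):
        x1, x2 = x[:self.half], x[self.half:]
        s, t = self._nets(x1)
        y2 = [v * math.exp(si) + ti for v, si, ti in zip(x2, s, t)]
        log_det = sum(s)  # exact log|det Jacobian|, no matrix inversion needed
        return x1 + y2, log_det

    def inverse(self, y):
        y1, y2 = y[:self.half], y[self.half:]
        s, t = self._nets(y1)  # y1 == x1, so s and t are recomputable exactly
        x2 = [(v - ti) * math.exp(-si) for v, si, ti in zip(y2, s, t)]
        return y1 + x2

layer = AffineCoupling(dim=6)
x = [0.5, -1.2, 0.3, 2.0, -0.7, 1.1]
y, log_det = layer.forward(x)
x_rec = layer.inverse(y)
print(max(abs(a - b) for a, b in zip(x, x_rec)) < 1e-12)  # True: exact inverse
```

In a full flow, many such layers are stacked with the split alternated so every dimension gets transformed; the summed log-determinants give the exact likelihood term, while an adversarial discriminator (the "implicit" component in a hybrid design like the one the abstract describes) supplies the perceptual signal.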