Adaptive Context-Aware Generative Adversarial Network for Low-quality Image Enhancement

Bibliographic Details
Main Authors: Xingyu Pan, Fengling Chen
Format: Article
Language:English
Published: Tamkang University Press 2025-06-01
Series:Journal of Applied Science and Engineering
Subjects:
Online Access:http://jase.tku.edu.tw/articles/jase-202601-29-01-0012
Description
Summary:Low-quality image enhancement methods can effectively improve image quality and detail, and have attracted great attention in various fields. However, current methods still face two issues: (1) They commonly learn a deterministic generation mapping between low-quality and normal images by relying on pixel-level reconstruction, leading to improper brightness and noise in the enhanced results. (2) They use only one type of generative model, either explicit or implicit, which limits the flexibility and efficiency of the model. To this end, a novel flow-based generative adversarial network with dual attention (FGAN-DA) is devised for data generation. Specifically, FGAN-DA constructs a hybrid generative model by combining explicit and implicit components within the GAN architecture, which effectively alleviates the detail blurring and singularities caused by relying on a single generation model. FGAN-DA comprises a dual-attention feature extraction module, an invertible flow generation network, and a Markov discriminant network. The three modules collaborate seamlessly to enhance images with good perceptual quality, which effectively boosts the performance of FGAN-DA. Finally, quantitative metrics and visual quality evaluations demonstrate that FGAN-DA sets a new baseline and can generate images with good perceptual quality.
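The abstract's "invertible flow generation network" refers to a normalizing-flow component, whose standard building block is the affine coupling layer: half of the input passes through unchanged, so the transform can be inverted exactly. The sketch below is a minimal, hedged illustration of that invertibility only; it is not the authors' code, and the toy scale network (a single `tanh` layer with weights `w`, `b`) is an assumption standing in for whatever subnetwork FGAN-DA actually uses.

```python
import numpy as np

# Minimal sketch of an affine coupling layer, the usual building block of
# invertible flow generators. The toy "scale network" (tanh over w, b) is a
# hypothetical stand-in, not the architecture from the paper.

def coupling_forward(x, w, b):
    """Split x in half; transform the second half conditioned on the first.
    Invertible by construction because x1 passes through unchanged."""
    x1, x2 = np.split(x, 2, axis=-1)
    s = np.tanh(x1 @ w + b)          # conditioning signal from the fixed half
    y2 = x2 * np.exp(s) + s          # affine transform of the second half
    return np.concatenate([x1, y2], axis=-1)

def coupling_inverse(y, w, b):
    """Recover x exactly: s is recomputed from the unchanged half y1."""
    y1, y2 = np.split(y, 2, axis=-1)
    s = np.tanh(y1 @ w + b)
    x2 = (y2 - s) * np.exp(-s)
    return np.concatenate([y1, x2], axis=-1)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))          # 4 samples, 8 features
w = rng.normal(size=(4, 4)) * 0.1
b = np.zeros(4)

y = coupling_forward(x, w, b)
x_rec = coupling_inverse(y, w, b)
print(np.allclose(x, x_rec))         # inversion is exact up to float error
```

In a flow-based GAN of the kind the abstract describes, stacking such layers gives an explicit, invertible generator (with a tractable log-determinant from the scale terms), while an adversarial discriminator supplies the implicit component.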
ISSN:2708-9967
2708-9975