Harnessing Spatial-Frequency Information for Enhanced Image Restoration

Bibliographic Details
Main Authors: Cheol-Hoon Park, Hyun-Duck Choi, Myo-Taeg Lim
Format: Article
Language: English
Published: MDPI AG, 2025-02-01
Series: Applied Sciences
Subjects:
Online Access: https://www.mdpi.com/2076-3417/15/4/1856
Summary: Image restoration aims to recover high-quality, clear images from those whose visibility has been degraded in various ways. Numerous deep learning-based approaches to image restoration have shown substantial improvements. However, two notable limitations remain: (a) although clean and degraded images exhibit substantial spectral mismatches in the frequency domain, only a few approaches exploit frequency-domain information; (b) variants of attention mechanisms have been proposed for high-resolution images in low-level vision tasks, but these methods still incur inherently high computational costs. To address these issues, we propose a Frequency-Aware Network (FreANet) for image restoration, which consists of two simple yet effective modules. We use a multi-branch/domain module that integrates latent features from the frequency and spatial domains using the discrete Fourier transform (DFT) and complex convolutional neural networks. Furthermore, we introduce a multi-scale pooling attention mechanism that applies average pooling along the row and column axes. We conducted extensive experiments on image restoration tasks, including defocus deblurring, motion deblurring, dehazing, and low-light enhancement; FreANet achieves remarkable results compared with previous approaches on these tasks.
ISSN: 2076-3417
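
The abstract describes a multi-branch/domain module that processes latent features in the frequency domain via the DFT and complex convolutional networks. The following PyTorch sketch illustrates that general idea only; the class names FreqBranch and ComplexConv2d, the 1x1 kernel size, and the orthonormal FFT normalization are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution built from two real-valued convolutions:
    (a + bi) * (c + di) = (ac - bd) + (ad + bc)i."""
    def __init__(self, in_ch, out_ch, kernel_size=1):
        super().__init__()
        padding = kernel_size // 2
        self.conv_re = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_im = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, z):
        re, im = z.real, z.imag
        out_re = self.conv_re(re) - self.conv_im(im)
        out_im = self.conv_re(im) + self.conv_im(re)
        return torch.complex(out_re, out_im)

class FreqBranch(nn.Module):
    """Map features to the frequency domain with a 2-D DFT, apply a
    learnable complex filter, and map back to the spatial domain."""
    def __init__(self, channels):
        super().__init__()
        self.cconv = ComplexConv2d(channels, channels)

    def forward(self, x):
        h, w = x.shape[-2:]
        spec = torch.fft.rfft2(x, norm="ortho")   # real-to-complex 2-D DFT
        spec = self.cconv(spec)                   # learnable spectral filtering
        return torch.fft.irfft2(spec, s=(h, w), norm="ortho")
```

In a multi-branch/domain design as described, the output of such a frequency branch would be fused with a parallel spatial-domain branch (e.g., ordinary convolutions); the fusion scheme is not specified in the abstract.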
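The abstract also mentions a multi-scale pooling attention mechanism using average pooling along the row and column axes. Below is a single-scale sketch of such axis-wise pooled attention; the name RowColPoolAttention, the reduction ratio, and the sigmoid gating are assumptions, and the multi-scale fusion is omitted.

```python
import torch
import torch.nn as nn

class RowColPoolAttention(nn.Module):
    """Attention from 1-D average pooling along each spatial axis.
    Cost grows with H + W rather than H * W, which is why axis-wise
    pooling is attractive for high-resolution inputs."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # average over columns -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # average over rows    -> (B, C, 1, W)
        self.reduce = nn.Sequential(
            nn.Conv2d(channels, hidden, 1),
            nn.ReLU(inplace=True),
        )
        self.attn_h = nn.Conv2d(hidden, channels, 1)
        self.attn_w = nn.Conv2d(hidden, channels, 1)

    def forward(self, x):
        h = self.reduce(self.pool_h(x))         # (B, hidden, H, 1)
        w = self.reduce(self.pool_w(x))         # (B, hidden, 1, W)
        a_h = torch.sigmoid(self.attn_h(h))     # per-row gate
        a_w = torch.sigmoid(self.attn_w(w))     # per-column gate
        return x * a_h * a_w                    # broadcast over W and H

# Usage: y = RowColPoolAttention(32)(torch.randn(1, 32, 64, 64))  # same shape out
```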