SAD-Net: a full spectral self-attention detail enhancement network for single image dehazing

Bibliographic Details
Main Authors: Qingjun Niu, Kun Wu, Jialu Zhang, Zhenqi Han, Lizhuang Liu
Format: Article
Language: English
Published: Nature Portfolio 2025-04-01
Series: Scientific Reports
Online Access:https://doi.org/10.1038/s41598-025-92061-1
Description
Summary: Abstract Single-image dehazing technology plays a significant role in video surveillance and intelligent transportation. However, existing dehazing methods built on vanilla convolution extract features only in the spatial domain and lack the ability to capture multi-directional information. To address these issues, we design a new full spectral attention-based detail enhancement dehazing network, named SAD-Net. SAD-Net adopts a U-Net-like structure and integrates Spectral Detail Enhancement Convolution (SDEC) and Frequency-Guided Attention (FGA). SDEC combines the wavelet transform with difference convolution (DC) to enhance high-frequency features while preserving low-frequency information. FGA detects haze-induced discrepancies and fine-tunes feature modulation accordingly. Experimental results show that SAD-Net outperforms six other dehazing networks on the Dense-Haze, NH-Haze, RESIDE, and I-Haze datasets. Specifically, it achieves a peak signal-to-noise ratio (PSNR) of 17.16 dB on the Dense-Haze dataset, surpassing current state-of-the-art (SOTA) methods. Additionally, SAD-Net achieves strong dehazing performance on an external dataset without any prior training.
ISSN: 2045-2322
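
Illustrative sketch: the abstract describes SDEC as combining a wavelet transform with difference convolution so that high-frequency detail is enhanced while low-frequency content is preserved. The PyTorch code below is a hypothetical reconstruction of such a block, not the authors' implementation; it assumes a single-level Haar-style sub-band split, a central difference convolution on the detail sub-bands, and a plain convolution on the approximation sub-band. All layer names, channel counts, and the residual fusion are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


def haar_split(x):
    """Average-normalized Haar-style split into one approximation (LL) and three detail sub-bands."""
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 4
    lh = (a + b - c - d) / 4
    hl = (a - b + c - d) / 4
    hh = (a - b - c + d) / 4
    return ll, lh, hl, hh


class CentralDifferenceConv(nn.Module):
    """Central difference convolution: vanilla conv response minus a gradient-like difference term."""
    def __init__(self, channels, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.theta = theta

    def forward(self, x):
        out = self.conv(x)
        # Convolving with the kernel's spatial sum is equivalent to a 1x1 conv,
        # so the subtraction emphasizes local intensity differences (edges, detail).
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        return out - self.theta * F.conv2d(x, kernel_sum)


class SDECBlock(nn.Module):
    """Hypothetical SDEC-style block: sharpen detail sub-bands, pass the approximation through lightly."""
    def __init__(self, channels):
        super().__init__()
        self.high = CentralDifferenceConv(channels * 3)          # LH, HL, HH stacked
        self.low = nn.Conv2d(channels, channels, 3, padding=1)   # preserve coarse content
        self.fuse = nn.Conv2d(channels * 4, channels, 1)

    def forward(self, x):
        ll, lh, hl, hh = haar_split(x)
        high = self.high(torch.cat([lh, hl, hh], dim=1))
        low = self.low(ll)
        fused = self.fuse(torch.cat([low, high], dim=1))
        # Upsample the half-resolution fusion back to the input size and add a residual.
        return x + F.interpolate(fused, size=x.shape[-2:], mode="bilinear", align_corners=False)


if __name__ == "__main__":
    block = SDECBlock(channels=16)
    feats = torch.randn(1, 16, 64, 64)
    print(block(feats).shape)  # torch.Size([1, 16, 64, 64])

In this sketch the residual connection lets the block fall back to the identity when no detail enhancement is needed, which is one plausible way to enhance high frequencies without disturbing low-frequency structure; the paper's actual SDEC and FGA formulations may differ.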