SAD-Net: a full spectral self-attention detail enhancement network for single image dehazing

Abstract: Single-image dehazing technology plays a significant role in video surveillance and intelligent transportation. However, existing dehazing methods built on vanilla convolution extract features only in the spatial domain and lack the ability to capture multi-directional information. To address these issues, we design a new full spectral attention-based detail enhancement dehazing network, named SAD-Net. SAD-Net adopts a U-Net-like structure and integrates Spectral Detail Enhancement Convolution (SDEC) and Frequency-Guided Attention (FGA). SDEC combines the wavelet transform with difference convolution (DC) to enhance high-frequency features while preserving low-frequency information. FGA detects haze-induced discrepancies and fine-tunes feature modulation. Experimental results show that SAD-Net outperforms six other dehazing networks on the Dense-Haze, NH-Haze, RESIDE, and I-Haze datasets; in particular, it raises the peak signal-to-noise ratio (PSNR) to 17.16 dB on Dense-Haze, surpassing current state-of-the-art (SOTA) methods. Additionally, SAD-Net achieves excellent dehazing performance on an external dataset without any prior training.
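The following is a minimal, self-contained PyTorch sketch of the two ideas the abstract describes: a wavelet split that lets a difference convolution enhance the high-frequency sub-bands while the low-frequency band is passed through, combined with a simple frequency-guided channel gate. It is an illustration under stated assumptions, not the authors' implementation: the module names (SDECBlock, CentralDifferenceConv), the Haar wavelet, the theta value, and the per-block gate standing in for the paper's FGA module are all assumptions made for the sketch.

import torch
import torch.nn as nn
import torch.nn.functional as F


def haar_dwt(x):
    """Single-level 2D Haar transform; returns (LL, LH, HL, HH), each at half resolution."""
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (a + b - c - d) / 2
    hl = (a - b + c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh


def haar_idwt(ll, lh, hl, hh):
    """Inverse of haar_dwt: reassemble the full-resolution tensor."""
    B, C, H, W = ll.shape
    x = ll.new_zeros(B, C, H * 2, W * 2)
    x[:, :, 0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[:, :, 0::2, 1::2] = (ll + lh - hl - hh) / 2
    x[:, :, 1::2, 0::2] = (ll - lh + hl - hh) / 2
    x[:, :, 1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x


class CentralDifferenceConv(nn.Module):
    """3x3 convolution minus a scaled response to the centre pixel (difference convolution)."""
    def __init__(self, channels, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.theta = theta  # assumed weighting between vanilla and difference terms

    def forward(self, x):
        out = self.conv(x)
        # Subtract the kernel-sum response at the centre pixel to emphasise local gradients/edges.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        return out - self.theta * F.conv2d(x, kernel_sum)


class SDECBlock(nn.Module):
    """Wavelet split -> enhance high-frequency bands -> merge back; the low band is passed through."""
    def __init__(self, channels):
        super().__init__()
        self.enhance = CentralDifferenceConv(channels * 3)  # LH, HL, HH stacked on the channel axis
        self.gate = nn.Sequential(                          # simple frequency-guided channel gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels * 3, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        ll, lh, hl, hh = haar_dwt(x)
        high = torch.cat([lh, hl, hh], dim=1)
        high = high + self.enhance(high)                    # residual detail enhancement
        attn = self.gate(high)                              # weights derived from high-frequency content
        lh, hl, hh = torch.chunk(high, 3, dim=1)
        out = haar_idwt(ll, lh, hl, hh)
        return out * attn + x                               # modulate and keep a skip connection


if __name__ == "__main__":
    block = SDECBlock(channels=32)
    feat = torch.randn(1, 32, 64, 64)
    print(block(feat).shape)  # torch.Size([1, 32, 64, 64])

In a U-Net-like dehazing network, a block of this kind would typically replace or augment the plain convolutions at each encoder/decoder stage; the paper's FGA is described as a dedicated attention module, for which the per-block gate above is only a loose stand-in.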

Bibliographic Details
Main Authors: Qingjun Niu, Kun Wu, Jialu Zhang, Zhenqi Han, Lizhuang Liu
Affiliation: Shanghai Advanced Research Institute, Chinese Academy of Sciences (all authors)
Format: Article
Language: English
Published: Nature Portfolio, 2025-04-01
Series: Scientific Reports
ISSN: 2045-2322
Online Access: https://doi.org/10.1038/s41598-025-92061-1