EFCRFNet: A novel multi-scale framework for salient object detection.

Bibliographic Details
Main Authors: Hong Peng, Yunfei Hu, Baocai Yu, Zhen Zhang
Format: Article
Language: English
Published: Public Library of Science (PLoS) 2025-01-01
Series: PLoS ONE
Online Access: https://doi.org/10.1371/journal.pone.0323757
Description
Summary: Salient Object Detection (SOD) is a fundamental task in computer vision, aiming to identify prominent regions within images. Traditional methods and deep learning-based models often encounter challenges in capturing crucial information in complex scenes, particularly due to inadequate edge feature extraction, which compromises the precise delineation of object contours and boundaries. To address these challenges, we introduce EFCRFNet, a novel multi-scale feature extraction model that incorporates two innovative modules: the Enhanced Conditional Random Field (ECRF) and the Edge Feature Enhancement Module (EFEM). The ECRF module leverages advanced spatial attention mechanisms to enhance multimodal feature fusion, enabling robust detection in complex environments. Concurrently, the EFEM module focuses on refining edge features to strengthen multi-scale feature representation, significantly improving boundary recognition accuracy. Extensive experiments on standard benchmark datasets demonstrate that EFCRFNet achieves notable performance gains across key evaluation metrics: 0.64% in MAE, 1.04% in Fm, 8.73% in Em, and 7.4% in Sm. These results underscore the effectiveness of EFCRFNet in enhancing detection accuracy and optimizing feature fusion, advancing the state of the art in salient object detection.
ISSN: 1932-6203
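
Illustrative note: the abstract above describes EFCRFNet only at a high level, and no code accompanies this record. As a rough orientation, the following is a minimal PyTorch sketch of the two general ideas the abstract names: attention-weighted fusion of multi-scale features (in the spirit of the ECRF module) and residual edge enhancement (in the spirit of the EFEM module). All class names, layer choices, and tensor shapes here are assumptions for illustration, not the authors' implementation.

# Hypothetical sketch of attention-weighted fusion and edge enhancement;
# not taken from the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAttentionFusion(nn.Module):
    """Fuse two feature maps with a learned spatial attention mask (ECRF-like idea)."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, channels // 4, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # Match spatial sizes before fusing multi-scale inputs.
        if feat_b.shape[-2:] != feat_a.shape[-2:]:
            feat_b = F.interpolate(feat_b, size=feat_a.shape[-2:],
                                   mode="bilinear", align_corners=False)
        mask = self.attn(torch.cat([feat_a, feat_b], dim=1))  # (B, 1, H, W) in [0, 1]
        return mask * feat_a + (1.0 - mask) * feat_b


class EdgeEnhancement(nn.Module):
    """Strengthen boundary responses with a fixed Laplacian branch (EFEM-like idea)."""

    def __init__(self, channels: int):
        super().__init__()
        lap = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
        # One Laplacian kernel per channel for a depthwise edge filter.
        self.register_buffer("lap", lap.view(1, 1, 3, 3).repeat(channels, 1, 1, 1))
        self.refine = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        edges = F.conv2d(feat, self.lap, padding=1, groups=feat.shape[1])
        return feat + self.refine(edges)  # residual edge refinement


if __name__ == "__main__":
    fuse, edge = SpatialAttentionFusion(64), EdgeEnhancement(64)
    a, b = torch.randn(1, 64, 56, 56), torch.randn(1, 64, 28, 28)
    out = edge(fuse(a, b))
    print(out.shape)  # torch.Size([1, 64, 56, 56])

In this sketch, fusion is a convex combination of the two scales weighted by a learned spatial mask, and edge enhancement adds a refined high-frequency (Laplacian) residual back onto the features; the published ECRF and EFEM modules may differ substantially in structure and detail.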