Enhanced object detection in low-visibility haze conditions with YOLOv9s.

Bibliographic Details
Main Authors: Yang Zhang, Bin Zhou, Xue Zhao, Xiaomeng Song
Format: Article
Language: English
Published: Public Library of Science (PLoS) 2025-01-01
Series: PLoS ONE
Online Access: https://doi.org/10.1371/journal.pone.0317852
Description
Summary: Low-visibility haze environments, marked by their inherent low contrast and high brightness, present a formidable challenge to the precision and robustness of conventional object detection algorithms. This paper introduces an enhanced object detection framework for YOLOv9s tailored to low-visibility haze conditions, capitalizing on the merits of contrastive learning for optimizing local feature details, as well as the benefits of multiscale attention mechanisms and dynamic focusing mechanisms for achieving real-time global quality optimization. Specifically, the framework incorporates Patchwise Contrastive Learning to fortify the correlation among positive samples within image patches, effectively reducing negative-sample interference and enhancing the model's capability to discern subtle local features of haze-impacted images. Additionally, the integration of Efficient Multi-Scale Attention and the Wise-IoU Dynamic Focusing Mechanism enhances the algorithm's sensitivity to channel, spatial-orientation, and locational information. Furthermore, a nonmonotonic strategy for dynamically adjusting the loss-function weights significantly boosts the model's detection precision and training efficiency. Comprehensive experimental evaluations on the COCO2017 fog-augmented dataset indicate that the proposed algorithm surpasses current state-of-the-art techniques in various assessment metrics, including precision, recall, and mean average precision (mAP). Our source code is available at: https://github.com/PaTinLei/EOD.
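The summary's Patchwise Contrastive Learning component can be illustrated with a minimal InfoNCE-style sketch: patch embeddings from the hazy image are pulled toward their corresponding clean-image patches, while all other patches in the batch serve as negatives. The function name, array shapes, temperature, and the hazy/clean pairing below are assumptions for illustration, not the authors' exact formulation; see the linked repository for the actual implementation.

```python
import numpy as np

def patchwise_info_nce(anchors, positives, temperature=0.1):
    """Illustrative patchwise InfoNCE contrastive loss (not the paper's exact loss).

    anchors:   (N, D) patch embeddings from the hazy image
    positives: (N, D) embeddings of the corresponding clean patches
    Patch i in `anchors` is attracted to patch i in `positives`;
    every other patch in the batch acts as a negative sample.
    """
    # L2-normalise so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)

    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Diagonal entries are the positive pairs; minimise their negative log-probability
    return -np.mean(np.diag(log_probs))
```

Correctly paired patches yield a lower loss than mismatched ones, which is the property the summary attributes to strengthening positive-sample correlation while suppressing negative-sample interference.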
ISSN: 1932-6203