Research on Inspection Method of Intelligent Factory Inspection Robot for Personnel Safety Protection

Bibliographic Details
Main Authors: Ruohuai Sun, Bin Zhao, Chengdong Wu, Xiaohong Qin
Format: Article
Language: English
Published: MDPI AG 2025-05-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/15/10/5750
Description
Summary: To address the low efficiency and high omission rates in monitoring workers' compliance with safety dress codes in intelligent factories, this paper proposes the SFA-YOLO network, an enhanced real-time detection model based on a Selective Feature Attention (SFA) mechanism. The model enables inspection robots to automatically and accurately identify whether workers' attire meets safety standards. First, the paper constructs a comprehensive safety-attire dataset of 3966 manually annotated images captured across diverse scenes, personnel counts, and operational conditions, covering four classes: vest, no-vest, helmet, and no-helmet; the manual annotation and scene diversity are intended to strengthen the model's generalization capability. Second, the proposed model integrates the SFA mechanism into the YOLO architecture. This mechanism combines multi-scale feature fusion with a gated feature extraction module to raise detection accuracy, strengthening the model's ability to detect occluded, partially visible, and small targets. Additionally, a lightweight network structure is adopted to meet the inference-speed requirements of real-time monitoring. Experimental results show that SFA-YOLO achieves a detection precision of 89.3% and a frame rate of 149 FPS on the safety-attire detection task, effectively balancing precision and real-time performance. Compared to YOLOv5n, the proposed model gains 5.2% in precision, 11.5% in recall, 13.1% in mAP@0.5, and 12.5% in mAP@0.5:0.95. Furthermore, a generalization experiment confirms the model's robustness across different task environments. Compared with conventional YOLO models, the proposed method performs more stably in safety-attire detection, offering a reliable technical foundation for safety management in intelligent factories.
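
The abstract describes the SFA mechanism only at a high level: multi-scale feature fusion combined with a gated feature extraction module. The snippet below is a minimal, hypothetical PyTorch sketch of such a gated multi-scale fusion block, not the paper's actual implementation; the class name `SelectiveFeatureAttention`, the channel counts, and the sigmoid channel gate are illustrative assumptions.

```python
# Minimal sketch of a gated, multi-scale feature-fusion block (PyTorch).
# All names and hyperparameters here are illustrative assumptions, not the
# paper's exact SFA design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelectiveFeatureAttention(nn.Module):
    """Fuses a fine- and a coarse-scale feature map, then re-weights the
    result with a learned channel-wise gate."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolution mixes the concatenated multi-scale features
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        # Gated feature extraction: squeeze to a per-channel score in (0, 1)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, high_res: torch.Tensor, low_res: torch.Tensor) -> torch.Tensor:
        # Upsample the coarse map to the fine map's spatial size, then concatenate
        low_up = F.interpolate(low_res, size=high_res.shape[-2:], mode="nearest")
        fused = self.fuse(torch.cat([high_res, low_up], dim=1))
        # Suppress uninformative channels; the gate lets evidence for small or
        # occluded targets dominate the fused representation
        return fused * self.gate(fused)


if __name__ == "__main__":
    sfa = SelectiveFeatureAttention(channels=64)
    p3 = torch.randn(1, 64, 80, 80)   # fine-scale feature map
    p4 = torch.randn(1, 64, 40, 40)   # coarse-scale feature map
    print(sfa(p3, p4).shape)          # torch.Size([1, 64, 80, 80])
```

In a YOLO-style neck, a block of this kind would sit where two pyramid levels are merged, so the channel gate can emphasize features relevant to small or partially occluded attire items; the paper's actual module may differ in structure and placement.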
ISSN: 2076-3417