Drainage Pipeline Multi-Defect Segmentation Assisted by Multiple Attention for Sonar Images

Bibliographic Details
Main Authors: Qilin Jin, Qingbang Han, Jianhua Qian, Liujia Sun, Kao Ge, Jiayu Xia
Format: Article
Language: English
Published: MDPI AG 2025-01-01
Series: Applied Sciences
Subjects:
Online Access: https://www.mdpi.com/2076-3417/15/2/597
Description
Summary: Drainage pipelines are vulnerable to a range of defects generated during long-term service in complex environments, such as branch concealed joints, variable diameter, two pipe mouth significances, foreign object insertion, pipeline rupture, and pipeline end disconnection. This paper proposes two enhancements to multiple attention learning for detecting and segmenting multiple defects. First, a large set of sonar videos of drainage pipeline defects was collected. The proposed multiple attention segmentation network was then applied to target segmentation, reaching a test precision of 96.0% and an mAP@50 of 90.9% in segmentation prediction. Compared with the coordinate attention and convolutional block attention module (CBAM) models, it showed a clear precision advantage, and its weight file is only 7.0 MB, far smaller than the YOLOv9 segmentation weights. The proposed multiple attention method was also applied to detection, instance segmentation, and pose estimation on different public datasets; in object detection on the COCO128-seg dataset under the same conditions, mAP@50:95 increased by 13.0% with the multiple attention mechanism. The results indicate the memory efficiency and high precision of the multiple attention model across several public datasets.
ISSN: 2076-3417
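
The record above gives no implementation details of the authors' multiple attention module. Purely as an illustrative, hedged sketch of the CBAM-style baseline that the summary compares against (the class names, channel count, and reduction ratio below are assumptions, not taken from the paper), a minimal PyTorch version of channel plus spatial attention could look like this:

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Squeeze spatial dims with avg- and max-pooling, pass both through a shared
    # bottleneck MLP, and gate each channel with a sigmoid weight (CBAM-style).
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return x * torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    # Compress channels with per-pixel average and max, then a 7x7 conv
    # produces a single-channel spatial gate map.
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAMBlock(nn.Module):
    # Channel attention followed by spatial attention on a feature map.
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))

# Example: gate a hypothetical 64-channel feature map from a sonar-image backbone.
feat = torch.randn(1, 64, 80, 80)
out = CBAMBlock(64)(feat)
print(out.shape)  # torch.Size([1, 64, 80, 80])

Coordinate attention, the other baseline named in the summary, instead pools separately along the height and width axes before gating; the exact form of the authors' multiple attention design is not described in this record.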