A Tunnel Lining Line Identification Algorithm Based on Supervised Heatmap
| Main Authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Editorial Department of Journal of Sichuan University (Engineering Science Edition), 2024-07-01 |
| Series: | 工程科学与技术 (Advanced Engineering Sciences) |
| Subjects: | |
| Online Access: | http://jsuese.scu.edu.cn/thesisDetails#10.12454/j.jsuese.202201161 |
| ISSN: | 2096-3246 |
Summary:

Objective: As a critical step in tunnel defect detection and analysis, lining line identification has long faced challenges in the analysis of detection data. Based on the CenterNet algorithm, this study proposes a supervised heatmap algorithm combined with an anti-noise disturbance technique to recognize the keypoints of lining lines, overcoming the limitations of traditional analysis methods and improving the accuracy and robustness of the results.

Methods: The algorithm is divided into two stages, keypoint detection and curve fitting, and incorporates three improvements: a grid classification task, outer-point (peripheral point) supervision, and anti-noise disturbance. In the keypoint detection stage, a heatmap grid classification task is added to address the CenterNet algorithm's limited heatmap fitting capability for dense keypoints. Each grid sequence, aligned in the vertical (A-scan) direction, is divided equally into several segments, and the segments are labeled according to where the keypoints fall within the sequence. A transformer is trained to learn the mapping from grid sequences to these classification labels, and the classification results in turn supervise the heatmap fitting process. In parallel, a number of outer-point heatmaps are generated in the initial training rounds, and the positional relationship between the outer points and the keypoints constrains the model's learning. In the fine-tuning stage of curve fitting, Gaussian noise is added to the curve input, and the anti-noise disturbance counteracts image noise interference. The tunnel lining dataset is divided into training, validation, and test sets: the training set contains 3,200 images with 753,562 keypoints (12.35 GB), the validation set 600 images with 188,635 keypoints (2.88 GB), and the test set 999 images with 236,742 keypoints (3.52 GB). In the experimental stage, this dataset is used to compare the CenterNet algorithm, the CornerNet algorithm, and the proposed algorithm. The effects of the three improvements on lining line recognition are first verified individually, and ablation experiments based on the CenterNet algorithm then demonstrate the impact of each improvement on overall performance.
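To make the grid classification task concrete, the following is a minimal sketch of how an A-scan-aligned heatmap could be split into equal segments, labeled by keypoint position, and supervised through a small transformer encoder, as the abstract describes. The segment count, module sizes, and all names (`GridClassifier`, `segment_labels`, `NUM_SEGMENTS`) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the heatmap grid classification task (all names and
# hyperparameters are illustrative assumptions, not the paper's code).
import torch
import torch.nn as nn

NUM_SEGMENTS = 16  # assumed: each A-scan column is split into 16 equal segments
                   # (heatmap height must be divisible by NUM_SEGMENTS)


def segment_labels(keypoint_rows: torch.Tensor, height: int) -> torch.Tensor:
    """Map ground-truth keypoint row indices to segment class labels.

    keypoint_rows: (B, W) row index of the lining-line keypoint per column.
    Returns (B, W) integer labels in [0, NUM_SEGMENTS).
    """
    seg_h = height // NUM_SEGMENTS
    return (keypoint_rows // seg_h).clamp_(0, NUM_SEGMENTS - 1)


class GridClassifier(nn.Module):
    """Transformer that classifies which segment of a column holds the keypoint."""

    def __init__(self, height: int, d_model: int = 64):
        super().__init__()
        self.seg_h = height // NUM_SEGMENTS
        self.embed = nn.Linear(self.seg_h, d_model)          # one token per segment
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)                    # score per segment

    def forward(self, heatmap: torch.Tensor) -> torch.Tensor:
        """heatmap: (B, H, W) -> logits (B, W, NUM_SEGMENTS)."""
        b, h, w = heatmap.shape
        # Treat each vertical (A-scan) column as a sequence of segments.
        cols = heatmap.permute(0, 2, 1).reshape(b * w, NUM_SEGMENTS, self.seg_h)
        tokens = self.encoder(self.embed(cols))              # (B*W, S, d_model)
        return self.head(tokens).squeeze(-1).view(b, w, NUM_SEGMENTS)


# Auxiliary loss that supervises the heatmap fitting process; it is used only
# during training, which is consistent with the unchanged inference times
# reported below.
heatmap = torch.rand(2, 256, 512)              # predicted heatmap (B, H, W)
gt_rows = torch.randint(0, 256, (2, 512))      # ground-truth keypoint rows
model = GridClassifier(height=256)
logits = model(heatmap)
loss = nn.functional.cross_entropy(logits.reshape(-1, NUM_SEGMENTS),
                                   segment_labels(gt_rows, 256).reshape(-1))
```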
Results and Discussions: The experimental results show that the lining line recognition performance of the different algorithms improves significantly once the grid classification task supervises training. The curve spacing errors of the CenterNet algorithm with the ResNet, DLA-34, and Hourglass-104 backbone networks are reduced by 0.40, 0.40, and 0.28 pixels, with inference times of 11, 19, and 71 ms, respectively; the CornerNet algorithm's error is reduced by 0.34 pixels, with an inference time of 23 ms. Because the grid classification task exists only during training, it does not affect inference time. Supervising the heatmap fitting process with outer points also improves recognition accuracy, and more supervision rounds yield better performance: after incorporating outer-point supervision in the first 10 rounds, the curve spacing errors of the different algorithms are 3.56, 3.20, and 2.65 pixels, improvements of 0.75, 0.75, and 0.73 pixels, respectively, while continuing the supervision beyond 10 rounds brings only limited further gains. Increasing the number of outer points initially produces significant improvements, with the optimal effect at 8 to 10 outer points: curve spacing errors of 3.48, 3.13, and 2.44 pixels, improvements of 1.84, 1.88, and 2.08 pixels. Beyond that, the effect gradually diminishes, so using 8 to 10 outer points to supervise model learning in the first 10 rounds yields the largest improvement. Introducing Gaussian noise into the curve fitting input also improves recognition accuracy, with the optimal disturbance intensity at approximately 0.08: the curve spacing errors of the different algorithms are 3.59, 3.20, and 2.49 pixels, improvements of 1.73, 1.81, and 2.03 pixels. As the noise intensity increases further, overall recognition performance deteriorates sharply.

In the ablation experiment, the CenterNet algorithm with the Hourglass-104 backbone network is employed to visually verify the impact of each improvement on recognition performance. Individually, classification task supervision, outer-point supervision, and anti-noise disturbance improve recognition accuracy by 0.09, 0.05, and 0.04 pixels, with memory consumption increases of 69, 19, and 6 MB, respectively. Combining classification task supervision with outer-point supervision improves the result by 0.21 pixels at a memory cost of 88 MB; classification task supervision with noise disturbance improves it by 0.16 pixels (77 MB); and outer-point supervision with anti-noise disturbance improves it by 0.11 pixels (32 MB). Applying all three measures together improves accuracy by 0.28 pixels and increases memory consumption by 95 MB.
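The anti-noise disturbance is straightforward to reproduce. Below is a minimal sketch, assuming the detected curve is represented as keypoint coordinates normalized to [0, 1] and fitted with a least-squares polynomial; the representation, the fitting model, and the names `perturb_curve` and `fit_polynomial` are assumptions rather than the paper's code. Only the noise standard deviation of 0.08 comes from the optimal disturbance intensity reported above.

```python
# Minimal sketch of the anti-noise disturbance applied when fine-tuning the
# curve-fitting stage (curve representation and fitting model are assumptions).
import torch

NOISE_STD = 0.08  # optimal disturbance intensity reported in the experiments


def perturb_curve(keypoints: torch.Tensor, std: float = NOISE_STD) -> torch.Tensor:
    """Add zero-mean Gaussian noise to normalized keypoint coordinates.

    keypoints: (N, 2) detected (x, y) positions scaled to [0, 1].
    """
    return keypoints + torch.randn_like(keypoints) * std


def fit_polynomial(keypoints: torch.Tensor, degree: int = 3) -> torch.Tensor:
    """Least-squares polynomial fit y = f(x) through the (noisy) keypoints."""
    x, y = keypoints[:, 0], keypoints[:, 1]
    vander = torch.stack([x ** d for d in range(degree + 1)], dim=1)  # (N, d+1)
    coeffs = torch.linalg.lstsq(vander, y.unsqueeze(1)).solution
    return coeffs.squeeze(1)


# During fine-tuning, the fit is computed on perturbed keypoints while the
# spacing error is measured against the clean curve, so the curve-fitting
# stage learns to tolerate image-noise-induced jitter in detected keypoints.
clean = torch.stack([torch.linspace(0, 1, 64),
                     0.5 + 0.1 * torch.sin(6.28 * torch.linspace(0, 1, 64))],
                    dim=1)
coeffs = fit_polynomial(perturb_curve(clean))
```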
Conclusions: The grid classification task, outer-point supervision, and anti-noise disturbance proposed in this study effectively improve tunnel lining line identification and mitigate the difficulty of detecting dense keypoints. The proposed algorithm can provide technical support for interpreting ground-penetrating radar non-destructive testing data in engineering construction.