Cross-Domain Person Re-Identification Based on Multi-Branch Pose-Guided Occlusion Generation
Saved in:
Main Authors: , , , ,
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: Sensors
Subjects:
Online Access: https://www.mdpi.com/1424-8220/25/2/473
Summary: Aiming at the problems caused in cross-domain person re-identification by a lack of feature matching due to occlusion and by fixed model parameters, a method based on multi-branch pose-guided occlusion generation is proposed. This method effectively improves the accuracy of person matching and enables identity matching even when pedestrian features are misaligned. Firstly, a novel pose-guided occlusion generation module is designed to enhance the model’s ability to extract discriminative features from non-occluded areas. Occlusion data are generated to simulate occluded person images, which improves the model’s learning ability and addresses the issue of misidentifying occluded samples. Secondly, a multi-branch feature fusion structure is constructed. By fusing the different feature information from the global and occlusion branches, the diversity of features is enriched, which improves the model’s generalization. Finally, a dynamic convolution kernel is constructed to calculate the similarity between images. This approach achieves effective point-to-point matching and resolves the problem of fixed model parameters. Experimental results indicate that, compared with current mainstream algorithms, this method shows significant advantages in first hit rate (Rank-1), mean average precision (mAP), and generalization performance. On MSMT17→DukeMTMC-reID, after applying re-ranking (Rerank) and time lift (TLift), the two indicators on Market1501, mAP and Rank-1, reached 80.5%, 84.3%, 81.9%, and 93.1%. Additionally, the algorithm achieved 51.6% and 41.3% on DukeMTMC-reID→Occluded-Duke, demonstrating good recognition performance on the occlusion dataset.
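The dynamic convolution idea in the summary — using one image's features as an adaptive matching kernel over another's, rather than relying on fixed learned parameters — can be illustrated with a minimal sketch. This is not the paper's actual architecture; the function name, shapes, and best-match pooling are illustrative assumptions, using 1×1 per-location kernels built from the query feature map:

```python
import numpy as np

def dynamic_conv_similarity(query_feat, gallery_feat):
    """Score a query/gallery pair by treating each query location's
    feature vector as a dynamic 1x1 convolution kernel applied over the
    gallery feature map (illustrative sketch, shapes (C, H, W))."""
    c, h, w = query_feat.shape

    def l2norm(x):
        # Normalize each spatial descriptor so responses are cosine similarities
        return x / (np.linalg.norm(x, axis=0, keepdims=True) + 1e-12)

    q = l2norm(query_feat.reshape(c, -1))    # (C, HW) query descriptors
    g = l2norm(gallery_feat.reshape(c, -1))  # (C, HW) gallery descriptors

    # Point-to-point response map: every query location scored against
    # every gallery location; the "kernel" is the query itself, so the
    # matcher adapts to each image pair instead of being fixed.
    responses = q.T @ g                      # (HW, HW)

    # Best-match pooling: each query point keeps its strongest response,
    # which tolerates spatial misalignment between the two images.
    return float(responses.max(axis=1).mean())
```

Because the kernel is derived from the query at inference time, misaligned but matching body parts can still find each other, which is the point-to-point matching behaviour the summary attributes to the dynamic kernel.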
ISSN: 1424-8220