Bridging the gap: multi-granularity representation learning for text-based vehicle retrieval
Main Authors:
Format: Article
Language: English
Published: Springer, 2024-11-01
Series: Complex & Intelligent Systems
Online Access: https://doi.org/10.1007/s40747-024-01614-w
Summary: Text-based cross-modal vehicle retrieval has been widely applied in smart city contexts and other scenarios. The objective is to identify semantically relevant target vehicles in videos from text descriptions, thereby facilitating the analysis of vehicle spatio-temporal trajectories. Current methodologies predominantly employ a two-tower architecture in which single-granularity features are extracted independently from the visual and textual domains. However, because of the intricate semantic relationships between videos and text, aligning the two modalities with a single-granularity feature representation is challenging. To address this issue, we introduce a Multi-Granularity Representation Learning model, termed MGRL, tailored for text-based cross-modal vehicle retrieval. Specifically, the model parses information from the two modalities into three hierarchical levels of feature representation: coarse-granularity, medium-granularity, and fine-granularity. A feature adaptive fusion strategy is then devised to automatically determine the optimal pooling mechanism. Finally, a multi-granularity contrastive learning approach is employed to ensure comprehensive semantic coverage from coarse to fine levels. Experiments on public benchmarks show that our method achieves up to a 14.56% improvement in text-to-vehicle retrieval performance, measured by Mean Reciprocal Rank (MRR), compared with 10 state-of-the-art baselines and 6 ablation variants.
ISSN: 2199-4536, 2198-6053
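The multi-granularity contrastive objective named in the summary can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the record does not specify MGRL's encoders, granularity definitions, loss weights, or temperature, so a standard symmetric InfoNCE loss is used as a stand-in at each of the three granularity levels.

```python
import torch
import torch.nn.functional as F

def info_nce(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired (video, text) embeddings.

    Matched pairs lie on the diagonal of the similarity matrix; all other
    entries in the batch serve as negatives. The temperature value is a
    common default, not one taken from the paper.
    """
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                 # (B, B) cosine similarities
    labels = torch.arange(v.size(0))               # diagonal = positive pairs
    # Average the video-to-text and text-to-video directions.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.T, labels))

def multi_granularity_loss(video_feats, text_feats, weights=(1.0, 1.0, 1.0)):
    """Sum contrastive losses over coarse, medium, and fine granularities.

    video_feats / text_feats: dicts mapping a granularity name to a (B, D)
    embedding tensor produced by the respective encoder branch. Equal
    per-level weights are an assumption for this sketch.
    """
    total = 0.0
    for w, level in zip(weights, ("coarse", "medium", "fine")):
        total = total + w * info_nce(video_feats[level], text_feats[level])
    return total

# Toy usage: random embeddings stand in for the encoder outputs.
B, D = 8, 256
video = {g: torch.randn(B, D) for g in ("coarse", "medium", "fine")}
text = {g: torch.randn(B, D) for g in ("coarse", "medium", "fine")}
print(multi_granularity_loss(video, text))
```

In this reading, each granularity level contributes its own alignment term, so coarse global matches and fine local matches are supervised jointly rather than through a single pooled representation.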