Complementarity-Oriented Feature Fusion for Face-Phone Trajectory Matching

Bibliographic Details
Main Authors: Changfeng Cao, Wenchuan Zhang, Hua Yang, Dan Ruan
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects: Multi-modality trajectory matching; feature fusion; trajectory feature extraction; common domain embedding; pedestrian tracking; trajectory reconstruction
Online Access: https://ieeexplore.ieee.org/document/10844270/
Collection: DOAJ
Description: CCTVs and telecom base stations act as sensors that collect massive amounts of face- and phone-related data. When used for person localization and trajectory characterization, they present quite different spatiotemporal characteristics: CCTV yields slowly sampled face-ID trajectories with a spatial resolution of approximately 20 meters, while telecom readings provide fast-sampled phone-ID trajectories with a spatial uncertainty of a few hundred meters. Either a face or a phone trajectory can be seen as an observation of the real trajectory of a moving pedestrian, so identifying the correspondence between face and phone trajectories is useful for reconstructing the trajectories of moving persons. To this end, we propose a complementarity-oriented feature fusion mechanism (COFFM) to model and utilize the common embedding and complementarity of these two measurement modalities. Specifically, a Cycle Heterogeneous Trajectory Translation Network (CCTTN) is proposed to realize a Trajectory Feature Extractor (TFE) that captures the latent transformation relationships between the face and phone modalities. The latent features from both translation directions are concatenated in the Feature Unifying (FU) module and fed into a binary face-phone trajectory matching discriminator (FPTPMD) to infer whether a face-phone trajectory pair corresponds to the same underlying motion trajectory. We evaluated our method on a large real-world face-phone trajectory dataset and achieved promising results, with an accuracy of 97.1% that exceeds comparable similarity-based methods. The developed principle and framework generalize well to other multi-modality trajectory matching tasks.
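As a rough illustration of the pipeline shape the abstract describes (a feature extractor per translation direction, concatenation of the two latent features, and a binary matching discriminator), the following NumPy sketch runs a forward pass with random, untrained weights. All dimensions, layer sizes, and function names here are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(dims):
    """Random-weight MLP layers (stand-in for a trained translator/discriminator)."""
    return [(rng.standard_normal((i, o)) * 0.1, np.zeros(o))
            for i, o in zip(dims[:-1], dims[1:])]

def forward(layers, x):
    """Tanh hidden layers, linear output layer."""
    for W, b in layers[:-1]:
        x = np.tanh(x @ W + b)
    W, b = layers[-1]
    return x @ W + b

# Hypothetical shapes: a face trajectory as 20 slow samples of (x, y),
# a phone trajectory as 200 fast samples of (x, y), shared latent size 16.
face_dim, phone_dim, latent = 20 * 2, 200 * 2, 16

face_to_phone = mlp([face_dim, 64, latent])   # latent of the face->phone direction
phone_to_face = mlp([phone_dim, 64, latent])  # latent of the phone->face direction
discriminator = mlp([2 * latent, 32, 1])      # binary match / no-match head

def match_score(face_traj, phone_traj):
    """Concatenate latents from both translation directions, then classify."""
    z_f = forward(face_to_phone, face_traj.ravel())
    z_p = forward(phone_to_face, phone_traj.ravel())
    logit = forward(discriminator, np.concatenate([z_f, z_p]))
    return 1.0 / (1.0 + np.exp(-logit[0]))  # sigmoid -> match probability

face = rng.standard_normal((20, 2))
phone = rng.standard_normal((200, 2))
p = match_score(face, phone)
assert 0.0 < p < 1.0
```

In the paper the two extractors come from a cycle translation network trained on paired trajectories; here random weights only demonstrate the data flow, so the score is not meaningful until the components are trained.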
Record ID: doaj-art-a79700305687475d99fa3dc313e055f7
Institution: Kabale University
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2025.3531106
Article number: 10844270
IEEE Access, vol. 13, pp. 17911-17918, 2025
Authors and affiliations:
Changfeng Cao (ORCID: 0009-0005-4927-1850), Department of Big Data and Computer Science, Guizhou Normal University, Guiyang, Guizhou, China
Wenchuan Zhang (ORCID: 0009-0002-2334-081X), Department of Big Data and Computer Science, Guizhou Normal University, Guiyang, Guizhou, China
Hua Yang (ORCID: 0000-0003-2179-5343), Department of Big Data and Computer Science, Guizhou Normal University, Guiyang, Guizhou, China
Dan Ruan (ORCID: 0000-0003-3400-7684), Department of Radiation Oncology, David Geffen School, University of California at Los Angeles, Los Angeles, CA, USA