Multi-View Prototypical Transport for Unsupervised Domain Adaptation

Bibliographic Details
Main Authors: Sunhyeok Lee, Dae-Shik Kim
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10836683/
Description
Summary: Unsupervised Domain Adaptation (UDA) methods struggle to bridge the gap between a labeled source domain and an unlabeled target domain, particularly due to the rigidity of deep feature representations derived from the penultimate layer of backbone feature extractors. These deeper representations, while discriminative, often fail to generalize under distributional shifts due to their specificity. To overcome these limitations, we introduce a novel representation learning framework, Multi-view Prototypical Transport (MPT), which leverages a multi-view hypothesis model to integrate and utilize the general information present in shallower layers. This approach facilitates a more comprehensive understanding of the relationships among intermediate features. Additionally, our framework incorporates a novel multi-view prototypical learning strategy that not only transfers domain-general representations, but also significantly enhances robustness against target domain outliers. Extensive experimental evaluations on various benchmark datasets demonstrate that our method outperforms existing state-of-the-art UDA approaches, confirming the effectiveness of our strategy in adapting to complex domain shifts.
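The abstract does not describe MPT's algorithm, but the general idea behind prototypical learning in UDA can be illustrated generically: compute a class "prototype" (mean feature vector) from labeled source features, then pseudo-label unlabeled target features by nearest-prototype assignment. The sketch below is a minimal illustration of that generic technique, not the authors' method; all function names and the toy data are invented for this example.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """Mean source feature vector per class (the class 'prototype')."""
    return np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])

def assign_by_prototype(target_features, prototypes):
    """Pseudo-label each target sample by its most cosine-similar prototype."""
    f = target_features / np.linalg.norm(target_features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return (f @ p.T).argmax(axis=1)

# Toy example: two well-separated source classes in a 4-D feature space.
rng = np.random.default_rng(0)
src = np.concatenate([rng.normal(0.0, 0.1, (20, 4)),   # class 0 near the origin
                      rng.normal(1.0, 0.1, (20, 4))])  # class 1 near (1, 1, 1, 1)
lbl = np.array([0] * 20 + [1] * 20)

protos = class_prototypes(src, lbl, num_classes=2)
tgt = rng.normal(1.0, 0.1, (5, 4))  # target samples drawn near class 1
print(assign_by_prototype(tgt, protos))
```

A full UDA pipeline would then train on these pseudo-labels (and, per the abstract, MPT additionally combines multiple feature views and guards against target outliers), but the nearest-prototype step above is the common core of prototype-based adaptation.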
ISSN:2169-3536