Unsupervised Domain Adaptation Method Based on Relative Entropy Regularization and Measure Propagation
This paper presents a novel unsupervised domain adaptation (UDA) framework that integrates information-theoretic principles to mitigate distributional discrepancies between source and target domains. The proposed method incorporates two key components: (1) relative entropy regularization, which leverages Kullback–Leibler (KL) divergence to align the predicted label distribution of the target domain with a reference distribution derived from the source domain, thereby reducing prediction uncertainty; and (2) measure propagation, a technique that transfers probability mass from the source domain to generate pseudo-measures—estimated probabilistic representations—for the unlabeled target domain. This dual mechanism enhances both global feature alignment and semantic consistency across domains. Extensive experiments on benchmark datasets (OfficeHome and DomainNet) demonstrate that the proposed approach consistently outperforms state-of-the-art methods, particularly in scenarios with significant domain shifts. These results confirm the robustness, scalability, and theoretical grounding of our framework, offering a new perspective on the fusion of information theory and domain adaptation.
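The abstract describes the two components only at a high level. As a rough, runnable illustration of the ideas (not the authors' implementation — all function names, tensor shapes, and the distance-based propagation weights below are our own assumptions), the relative entropy term can be read as a KL penalty between the mean predicted target label distribution and a source reference distribution, and measure propagation as pushing source label mass onto target samples:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions, smoothed for stability."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def relative_entropy_regularizer(target_probs, source_ref):
    """Penalize mismatch between the mean predicted label distribution on a
    target batch (shape [batch, classes]) and a reference class distribution
    estimated from the source domain."""
    mean_pred = np.asarray(target_probs, dtype=float).mean(axis=0)
    return kl_divergence(mean_pred, source_ref)

def propagate_measure(source_feats, source_labels, target_feats, n_classes, temp=1.0):
    """Toy 'measure propagation': each target point receives source label mass
    through a softmax over negative squared feature distances, producing a
    pseudo-measure (soft label distribution) per target sample."""
    d2 = ((target_feats[:, None, :] - source_feats[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / temp)
    w /= w.sum(axis=1, keepdims=True)        # rows sum to 1
    onehot = np.eye(n_classes)[source_labels]
    return w @ onehot                        # [n_target, n_classes]
```

Here the Gaussian-kernel weighting is a stand-in for whatever transport or propagation scheme the paper actually uses; the point is that both terms operate on probability distributions, so they compose naturally into one training objective.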
Saved in:
| Main Authors: | Lianghao Tan, Zhuo Peng, Yongjia Song, Xiaoyi Liu, Huangqi Jiang, Shubing Liu, Weixi Wu, Zhiyuan Xiang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-04-01 |
| Series: | Entropy |
| Subjects: | unsupervised domain adaptation; information theory; relative entropy regularization; probability measure |
| Online Access: | https://www.mdpi.com/1099-4300/27/4/426 |
| Field | Value |
|---|---|
| author | Lianghao Tan; Zhuo Peng; Yongjia Song; Xiaoyi Liu; Huangqi Jiang; Shubing Liu; Weixi Wu; Zhiyuan Xiang |
| collection | DOAJ |
| description | This paper presents a novel unsupervised domain adaptation (UDA) framework that integrates information-theoretic principles to mitigate distributional discrepancies between source and target domains. The proposed method incorporates two key components: (1) relative entropy regularization, which leverages Kullback–Leibler (KL) divergence to align the predicted label distribution of the target domain with a reference distribution derived from the source domain, thereby reducing prediction uncertainty; and (2) measure propagation, a technique that transfers probability mass from the source domain to generate pseudo-measures—estimated probabilistic representations—for the unlabeled target domain. This dual mechanism enhances both global feature alignment and semantic consistency across domains. Extensive experiments on benchmark datasets (OfficeHome and DomainNet) demonstrate that the proposed approach consistently outperforms state-of-the-art methods, particularly in scenarios with significant domain shifts. These results confirm the robustness, scalability, and theoretical grounding of our framework, offering a new perspective on the fusion of information theory and domain adaptation. |
| format | Article |
| id | doaj-art-845dddcdb493489186fdb591adc82f2f |
| institution | DOAJ |
| issn | 1099-4300 |
| language | English |
| publishDate | 2025-04-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | Entropy |
| doi | 10.3390/e27040426 |
| affiliations | Lianghao Tan, Zhuo Peng, Xiaoyi Liu: Department of Computer Science, Arizona State University, Tempe, AZ 85281, USA; Yongjia Song: Department of Language Science, University of California, Irvine, CA 92697, USA; Huangqi Jiang: Department of Computer Science, Georgia Institute of Technology, Atlanta, GA 30332, USA; Shubing Liu: Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; Weixi Wu: Department of Computer Science, New York University, Brooklyn, NY 10003, USA; Zhiyuan Xiang: Department of Computer Science, University of California, San Diego, CA 92093, USA |
| title | Unsupervised Domain Adaptation Method Based on Relative Entropy Regularization and Measure Propagation |
| topic | unsupervised domain adaptation; information theory; relative entropy regularization; probability measure |
| url | https://www.mdpi.com/1099-4300/27/4/426 |