A Unified Deep-Domain Adaptation Framework: Advancing Feature Separability and Local Alignment


Bibliographic Details
Main Authors: Pranav Kumar, Jimson Mathew, Rakesh Kumar Sanodiya, Avinash Kumar Chouhan, Rahul Reddy Bukkasamudram, Chandra Sai Teja Adhikarla
Format: Article
Language: English
Published: MDPI AG 2025-06-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/25/12/3671
Description
Summary: Domain adaptation is a key research area within transfer learning. Domain shift is a well-known problem that arises when the data distribution of the source domain, from which the training data are drawn, differs significantly from that of the target domain, from which the test data are drawn. Aligning the source and target domains is one solution, but the alignment itself may alter the intrinsic properties of the data. To address domain shift, we introduce a novel method, "A Unified Deep-Domain Adaptation Framework: Advancing Feature Separability and Local Alignment" (DDASLA), which incorporates an attention mechanism into the ResNet18 model to improve its feature-extraction capability. Beyond self-attention, DDASLA uses a combined loss function consisting of angular loss, Local Maximum Mean Discrepancy (LMMD), and entropy minimization: angular loss enhances feature discrimination through angular alignment, LMMD aligns local data distributions across domains, and entropy minimization refines the decision boundaries. Comprehensive experiments on the Office and remote sensing datasets show that DDASLA outperforms several state-of-the-art methods. These findings indicate that DDASLA improves model generalization and robustness across domains, paving the way for future domain adaptation research.
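The combined objective described in the abstract can be sketched as follows. This is a minimal, illustrative NumPy sketch, not the paper's implementation: the exact angular-loss formulation, the class-conditional weighting inside LMMD, the kernel choice, and the trade-off weights `lam_mmd` and `lam_ent` are all assumptions, and a simplified global linear-kernel MMD stands in for the paper's local (per-class) LMMD.

```python
import numpy as np

def entropy_loss(probs, eps=1e-12):
    # Mean Shannon entropy of the target-domain softmax outputs;
    # minimizing it pushes predictions toward confident, low-entropy
    # outputs and thus refines decision boundaries.
    return float(-np.mean(np.sum(probs * np.log(probs + eps), axis=1)))

def mmd_linear(feat_s, feat_t):
    # Simplified MMD with a linear kernel: squared distance between the
    # mean source and target feature vectors. The paper's LMMD instead
    # aligns *local* (class-conditional) distributions; this global
    # variant is only a stand-in.
    delta = feat_s.mean(axis=0) - feat_t.mean(axis=0)
    return float(delta @ delta)

def angular_loss(features, class_weights, labels, margin=0.2):
    # Cosine-margin style angular term (one plausible reading of
    # "angular alignment"): encourage each source feature to point in
    # the direction of its class-weight vector with at least `margin`
    # cosine similarity.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = class_weights / np.linalg.norm(class_weights, axis=1, keepdims=True)
    cos_to_own_class = (f @ w.T)[np.arange(len(labels)), labels]
    return float(np.mean(np.maximum(0.0, margin - cos_to_own_class)))

def combined_loss(probs_t, feat_s, feat_t, class_weights, labels_s,
                  lam_mmd=1.0, lam_ent=0.1):
    # Weighted sum of the three terms named in the abstract; the weights
    # are hypothetical hyperparameters.
    return (angular_loss(feat_s, class_weights, labels_s)
            + lam_mmd * mmd_linear(feat_s, feat_t)
            + lam_ent * entropy_loss(probs_t))
```

In a real training loop each term would be computed on differentiable tensors (e.g. in PyTorch) and backpropagated jointly through the attention-augmented ResNet18 backbone; the sketch only shows how the three penalties compose.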
ISSN:1424-8220