SIDDA: SInkhorn Dynamic Domain Adaptation for image classification with equivariant neural networks
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IOP Publishing, 2025-01-01 |
| Series: | Machine Learning: Science and Technology |
| Subjects: | |
| Online Access: | https://doi.org/10.1088/2632-2153/adf701 |
| Summary: | Modern neural networks (NNs) often do not generalize well in the presence of a ‘covariate shift’; that is, in situations where the training and test data distributions differ, but the conditional distribution of classification labels given the data remains unchanged. In such cases, NN generalization can be reduced to a problem of learning more robust, domain-invariant features. Domain adaptation (DA) methods include a broad range of techniques aimed at achieving this; however, these methods have struggled with the need for extensive hyperparameter tuning, which then incurs significant computational costs. In this work, we introduce SInkhorn Dynamic Domain Adaptation (SIDDA), an out-of-the-box DA training algorithm built upon the Sinkhorn divergence, which can achieve effective domain alignment with minimal hyperparameter tuning and computational overhead. We demonstrate the efficacy of our method on multiple simulated and real datasets of varying complexity, including simple shapes, handwritten digits, real astronomical observations, and remote sensing data. These datasets exhibit covariate shifts due to noise, blurring, differences between telescopes, and variations in imaging wavelengths. SIDDA is compatible with a variety of NN architectures, and it works particularly well in improving classification accuracy and model calibration when paired with symmetry-aware equivariant NNs (ENNs). We find that SIDDA consistently enhances the generalization capabilities of NNs, achieving up to a ${\approx}40\%$ improvement in classification accuracy on unlabeled target data, while also providing a more modest performance gain of $\lesssim 1\%$ on labeled source data. We also study the efficacy of DA on ENNs with respect to the varying group orders of the dihedral group $D_N$, and find that the model performance improves as the degree of equivariance increases. Finally, if SIDDA achieves proper domain alignment, it also enhances model calibration on both source and target data, with the most significant gains in the unlabeled target domain, achieving over an order of magnitude improvement in the expected calibration error and Brier score. SIDDA’s versatility across various NN models and datasets, combined with its automated approach to domain alignment, has the potential to significantly advance multi-dataset studies by enabling the development of highly generalizable models. |
| ISSN: | 2632-2153 |
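For context on the method the record's summary describes, below is a minimal sketch of a debiased Sinkhorn divergence used as a domain-alignment loss between batches of source and target features. It is an illustration only, not the paper's implementation: the entropic blur `eps`, the iteration count `n_iters`, and the uniform weighting of each batch are all assumptions.

```python
import math
import torch

def sinkhorn_cost(x, y, eps=0.05, n_iters=100):
    """Entropic-regularized OT cost between two uniformly weighted feature batches."""
    C = torch.cdist(x, y) ** 2                                 # pairwise squared-Euclidean costs
    n, m = C.shape
    log_a = torch.full((n, 1), -math.log(n), device=x.device)  # uniform source weights (log)
    log_b = torch.full((1, m), -math.log(m), device=x.device)  # uniform target weights (log)
    f = torch.zeros(n, 1, device=x.device)                     # dual potentials
    g = torch.zeros(1, m, device=x.device)
    for _ in range(n_iters):
        # Log-domain Sinkhorn updates for numerical stability.
        f = -eps * torch.logsumexp(log_b + (g - C) / eps, dim=1, keepdim=True)
        g = -eps * torch.logsumexp(log_a + (f - C) / eps, dim=0, keepdim=True)
    log_P = log_a + log_b + (f + g - C) / eps                  # primal plan from the duals
    return (log_P.exp() * C).sum()                             # transport cost <P, C>

def sinkhorn_divergence(x, y, eps=0.05, n_iters=100):
    """Debiased Sinkhorn divergence: S(x, y) = OT(x, y) - (OT(x, x) + OT(y, y)) / 2."""
    return (sinkhorn_cost(x, y, eps, n_iters)
            - 0.5 * sinkhorn_cost(x, x, eps, n_iters)
            - 0.5 * sinkhorn_cost(y, y, eps, n_iters))
```

A training step would then minimize something like `task_loss + lam * sinkhorn_divergence(feats_src, feats_tgt)`, combining a supervised loss on labeled source data with alignment against unlabeled target features; `lam`, `feats_src`, and `feats_tgt` are hypothetical names here, and the ‘dynamic’ weighting referenced in the title is not reproduced in this sketch.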
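The summary's study of dihedral group orders concerns equivariant architectures. As one hedged illustration (not necessarily the authors' implementation), a $D_4$-equivariant convolution can be built with the third-party escnn package; the layer widths and kernel size below are arbitrary assumptions.

```python
import torch
from escnn import gspaces, nn as enn

# Dihedral group D_4: 4 discrete rotations combined with reflections.
r2_act = gspaces.flipRot2dOnR2(N=4)

# Grayscale input (trivial representation) mapped to 8 regular-representation fields.
in_type = enn.FieldType(r2_act, [r2_act.trivial_repr])
out_type = enn.FieldType(r2_act, 8 * [r2_act.regular_repr])
conv = enn.R2Conv(in_type, out_type, kernel_size=5, padding=2)

x = enn.GeometricTensor(torch.randn(1, 1, 32, 32), in_type)
y = conv(x)  # output features transform predictably under D_4 image symmetries
```

Raising `N` increases the order of $D_N$, the axis along which the summary reports improving performance.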
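Finally, the calibration gains are quantified with the expected calibration error (ECE) and Brier score. Below is a standard sketch of both metrics over softmax outputs; the equal-width binning scheme and `n_bins=15` are assumptions, not details from the record.

```python
import torch
import torch.nn.functional as F

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE: bin predictions by confidence, average |confidence - accuracy| per bin."""
    conf, pred = probs.max(dim=1)
    correct = pred.eq(labels).float()
    edges = torch.linspace(0.0, 1.0, n_bins + 1)
    ece = torch.zeros((), device=probs.device)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # Weight each bin's gap by the fraction of samples it holds.
            ece = ece + mask.float().mean() * (conf[mask].mean() - correct[mask].mean()).abs()
    return ece

def brier_score(probs, labels):
    """Multiclass Brier score: mean squared error against one-hot labels."""
    onehot = F.one_hot(labels, num_classes=probs.shape[1]).float()
    return ((probs - onehot) ** 2).sum(dim=1).mean()
```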