Probability Density Function Distance-Based Augmented CycleGAN for Image Domain Translation with Asymmetric Sample Size
| Main Authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-04-01 |
| Series: | Mathematics |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2227-7390/13/9/1406 |
| Summary: | Many image-to-image translation tasks face an inherent asymmetry between the two domains: one domain is scarce, i.e., it contains significantly less training data than the other. Only a few methods in the literature tackle the problem of training a CycleGAN in such an environment. In this paper, we propose a novel method that uses pdf (probability density function) distance-based augmentation of the discriminator network corresponding to the scarce domain. Specifically, the method adds examples translated from the non-scarce domain into the pool of the scarce-domain discriminator, but only those examples for which the assumed Gaussian pdf in VGG19 feature space is sufficiently close to the GMM pdf that represents the relevant initial pool in the same feature space. In experiments on several datasets, the proposed method showed significantly improved characteristics compared with a standard unsupervised CycleGAN, as well as with a Bootstrapped SSL CycleGAN, in which translated examples are added to the pool of the scarce-domain discriminator without any discrimination. Moreover, in the considered scarce scenarios, it also shows competitive results compared to fully supervised image-to-image translation based on the pix2pix method. |
|---|---|
| ISSN: | 2227-7390 |
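The acceptance rule described in the summary — keep a translated example only if its assumed Gaussian pdf in VGG19 feature space is close enough to the GMM pdf of the scarce-domain pool — can be sketched as below. This is a minimal illustration, not the paper's implementation: the VGG19 feature extraction is replaced by stand-in vectors, the choice of KL divergence as the pdf distance, the isotropic covariance for the candidate example, and the acceptance threshold are all assumptions, since the abstract does not specify them.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-ins for VGG19 feature vectors of the scarce-domain discriminator
# pool (in the paper these would come from a pretrained VGG19 network).
pool_features = rng.normal(loc=0.0, scale=1.0, size=(200, 16))

# GMM pdf representing the initial scarce-domain pool in feature space.
gmm = GaussianMixture(n_components=3, random_state=0).fit(pool_features)


def mc_kl_gaussian_to_gmm(mean, cov, gmm, n_samples=2000, rng=rng):
    """Monte-Carlo estimate of KL(N(mean, cov) || GMM).

    The abstract only says a "pdf distance" is used; KL divergence is an
    illustrative choice here, not necessarily the paper's distance.
    """
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    diff = samples - mean
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    d = len(mean)
    # Log-density of the candidate Gaussian at the sampled points.
    log_p = (-0.5 * np.einsum("ij,jk,ik->i", diff, inv, diff)
             - 0.5 * (d * np.log(2.0 * np.pi) + logdet))
    # Log-density of the pool GMM at the same points.
    log_q = gmm.score_samples(samples)
    return float(np.mean(log_p - log_q))


# Assumed Gaussian pdf for one translated example: mean at its feature
# vector, small isotropic covariance (a modelling assumption).
candidate_feature = rng.normal(size=16)
kl = mc_kl_gaussian_to_gmm(candidate_feature, 0.1 * np.eye(16), gmm)

THRESHOLD = 50.0  # hypothetical acceptance threshold
accept = kl < THRESHOLD  # add to the discriminator pool only if close enough
```

In this sketch, examples whose estimated distance falls below the threshold would be appended to the scarce-domain discriminator's pool, while the rest are discarded, which is the filtering step that distinguishes the method from the unfiltered Bootstrapped SSL variant mentioned in the summary.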