Reducing Cross-Sensor Domain Gaps in Tactile Sensing via Few-Sample-Driven Style-to-Content Unsupervised Domain Adaptation
Transferring knowledge learned from standard GelSight sensors to other visuotactile sensors is appealing for reducing data collection and annotation. However, such cross-sensor transfer is challenging due to differences between sensors in internal light sources, imaging effects, and elastomer properties. By treating the data collected from each type of visuotactile sensor as a distinct domain, we propose a few-sample-driven style-to-content unsupervised domain adaptation method to reduce cross-sensor domain gaps. We first propose a Global and Local Aggregation Bottleneck (GLAB) layer that compresses features extracted by an encoder into compact representations that retain key information, facilitating unlabeled few-sample-driven learning. We introduce a Fourier-style transformation (FST) module and a prototype-constrained learning loss to promote global conditional domain-adversarial adaptation, bridging style-level gaps. We also propose a high-confidence guided teacher–student network that uses a self-distillation mechanism to further reduce content-level gaps between the two domains. Experiments on three cross-sensor domain adaptation tasks and on real-world robotic cross-sensor shape recognition demonstrate that our method outperforms state-of-the-art approaches, notably achieving 89.8% accuracy on the DIGIT recognition dataset.
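The abstract names two transferable components, a Fourier-style transformation (FST) module for bridging style-level gaps and a high-confidence guided teacher–student network for content-level gaps, but does not give their formulations here. The two sketches below are minimal illustrations of the generic techniques those terms usually denote, not the authors' implementation; the function names and the `beta`, `tau`, and `ema` hyperparameters are assumptions made only for the example.

```python
import numpy as np

def fourier_style_transfer(source_img, target_img, beta=0.1):
    """Swap the low-frequency amplitude spectrum of `source_img` with that of
    `target_img`, keeping the source phase (i.e., its content) intact.

    Both inputs are float arrays of shape (H, W, C) scaled to [0, 1];
    `beta` sets the relative size of the swapped low-frequency window.
    """
    src = np.fft.fft2(source_img, axes=(0, 1))
    tgt = np.fft.fft2(target_img, axes=(0, 1))

    src_amp, src_phase = np.abs(src), np.angle(src)
    tgt_amp = np.abs(tgt)

    # Centre the spectra so the low frequencies sit in the middle.
    src_amp = np.fft.fftshift(src_amp, axes=(0, 1))
    tgt_amp = np.fft.fftshift(tgt_amp, axes=(0, 1))

    h, w = source_img.shape[:2]
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    # The central block carries illumination/colour statistics ("style"),
    # which is what differs most between visuotactile sensors.
    src_amp[ch - b:ch + b, cw - b:cw + b] = tgt_amp[ch - b:ch + b, cw - b:cw + b]

    src_amp = np.fft.ifftshift(src_amp, axes=(0, 1))
    stylised = np.real(np.fft.ifft2(src_amp * np.exp(1j * src_phase), axes=(0, 1)))
    return np.clip(stylised, 0.0, 1.0)
```

Similarly, a confidence-thresholded self-distillation step with an exponential-moving-average teacher is one common way to realise a "high-confidence guided" teacher–student scheme on unlabeled target-sensor images:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, ema=0.999):
    # Teacher parameters track an exponential moving average of the student.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(ema).add_(s_p, alpha=1.0 - ema)

def self_distillation_step(student, teacher, target_batch, optimizer, tau=0.95):
    """One update on an unlabeled target-domain batch: the teacher produces
    pseudo-labels, and only predictions whose confidence exceeds `tau`
    contribute to the student's cross-entropy loss."""
    with torch.no_grad():
        probs = F.softmax(teacher(target_batch), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf >= tau          # keep only high-confidence pseudo-labels

    if mask.any():
        loss = F.cross_entropy(student(target_batch)[mask], pseudo[mask])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    ema_update(teacher, student)
```

In the paper's setting, the stylised source images and the distilled target predictions would additionally feed the domain-adversarial and prototype-constrained objectives described in the abstract; those pieces are omitted from these sketches.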
Main Authors: | Xingshuo Jing, Kun Qian |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2025-01-01 |
Series: | Sensors |
Subjects: | cross-sensor domain gaps; tactile sensing; unsupervised domain adaptation; style to content |
Online Access: | https://www.mdpi.com/1424-8220/25/1/256 |
author | Xingshuo Jing; Kun Qian |
---|---|
collection | DOAJ |
description | Transferring knowledge learned from standard GelSight sensors to other visuotactile sensors is appealing for reducing data collection and annotation. However, such cross-sensor transfer is challenging due to differences between sensors in internal light sources, imaging effects, and elastomer properties. By treating the data collected from each type of visuotactile sensor as a distinct domain, we propose a few-sample-driven style-to-content unsupervised domain adaptation method to reduce cross-sensor domain gaps. We first propose a Global and Local Aggregation Bottleneck (GLAB) layer that compresses features extracted by an encoder into compact representations that retain key information, facilitating unlabeled few-sample-driven learning. We introduce a Fourier-style transformation (FST) module and a prototype-constrained learning loss to promote global conditional domain-adversarial adaptation, bridging style-level gaps. We also propose a high-confidence guided teacher–student network that uses a self-distillation mechanism to further reduce content-level gaps between the two domains. Experiments on three cross-sensor domain adaptation tasks and on real-world robotic cross-sensor shape recognition demonstrate that our method outperforms state-of-the-art approaches, notably achieving 89.8% accuracy on the DIGIT recognition dataset. |
format | Article |
id | doaj-art-fa1476042c9447f38fe55d49ba2e4a5c |
institution | Kabale University |
issn | 1424-8220 |
language | English |
publishDate | 2025-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
doi | 10.3390/s25010256 |
citation | Sensors, vol. 25, iss. 1, art. 256 (2025-01-01) |
affiliation | School of Automation, Southeast University, Nanjing 210096, China (Xingshuo Jing; Kun Qian) |
title | Reducing Cross-Sensor Domain Gaps in Tactile Sensing via Few-Sample-Driven Style-to-Content Unsupervised Domain Adaptation |
topic | cross-sensor domain gaps; tactile sensing; unsupervised domain adaptation; style to content |
url | https://www.mdpi.com/1424-8220/25/1/256 |