Evaluating masked self-supervised learning frameworks for 3D dental model segmentation tasks

Bibliographic Details
Main Authors: Lucas Krenmayr, Reinhold von Schwerin, Daniel Schaudt, Pascal Riedel, Alexander Hafner, Marc Geserick
Format: Article
Language: English
Published: Nature Portfolio, 2025-05-01
Series: Scientific Reports
Online Access:https://doi.org/10.1038/s41598-025-01014-1
ISSN: 2045-2322
Collection: DOAJ
Record ID: doaj-art-17dd677bb1c44c8f9d1216df83e4c862

Author affiliations:
Lucas Krenmayr: Cooperative Doctoral Program for Data Science and Analytics, University of Ulm
Reinhold von Schwerin: Department of Computer Science, University of Applied Sciences
Daniel Schaudt: Department of Computer Science, University of Applied Sciences
Pascal Riedel: Department of Computer Science, University of Applied Sciences
Alexander Hafner: Department of Computer Science, University of Applied Sciences
Marc Geserick: smyl tp GmbH

Abstract: The application of deep learning to dental models is crucial for automated computer-aided treatment planning. However, developing highly accurate models requires a substantial amount of accurately labeled data, which is challenging to obtain, especially in the medical domain. Masked self-supervised learning has shown great promise in overcoming data scarcity, but its effectiveness has not been well explored in the 3D domain, particularly on dental models. In this work, we investigate the applicability of four recently published masked self-supervised learning frameworks (Point-BERT, Point-MAE, Point-GPT, and Point-M2AE) for improving downstream tasks such as tooth and brace segmentation. These frameworks were pre-trained on a proprietary dataset of over 4000 unlabeled 3D dental models and fine-tuned on the publicly available Teeth3DS dataset for tooth segmentation and a self-constructed braces segmentation dataset. Through a set of experiments, we demonstrate that pre-training can enhance the performance of downstream tasks, especially when training data is scarce or imbalanced, a critical factor for clinical usability. Our results show that the benefits are most noticeable when training data is limited but diminish as more labeled data becomes available, providing insights into when and how this technique should be applied to maximize its effectiveness.