Enhancing efficient deep learning models with multimodal, multi-teacher insights for medical image segmentation
| Main Authors: | Khondker Fariha Hossain, Sharif Amit Kamran, Joshua Ong, Alireza Tavakkoli |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-05-01 |
| Series: | Scientific Reports |
| ISSN: | 2045-2322 |
| Online Access: | https://doi.org/10.1038/s41598-025-91430-0 |
Abstract: The rapid evolution of deep learning has dramatically enhanced the field of medical image segmentation, leading to models with unprecedented accuracy in analyzing complex medical images. Deep learning-based segmentation holds significant promise for advancing clinical care and enhancing the precision of medical interventions. However, the high computational demand and complexity of these models present significant barriers to their application in resource-constrained clinical settings. To address this challenge, we introduce Teach-Former, a novel knowledge distillation (KD) framework that leverages a Transformer backbone to condense the knowledge of multiple teacher models into a single, streamlined student model. The framework also excels at interpreting contextual and spatial relationships across multimodal images, yielding more accurate segmentation. Teach-Former stands out by harnessing multimodal inputs (CT, PET, MRI) and distilling both the final predictions and the intermediate attention maps, ensuring a richer transfer of spatial and contextual knowledge. Through this technique, the student model inherits the teachers' capacity for fine segmentation while operating with a significantly reduced parameter count and computational footprint. Additionally, a novel training strategy optimizes knowledge transfer, ensuring the student model captures the intricate feature mappings essential for high-fidelity segmentation. Teach-Former has been evaluated on two extensive multimodal datasets, HECKTOR21 and PI-CAI22, which encompass various image types. The results demonstrate that our KD strategy reduces model complexity while surpassing existing state-of-the-art methods. These findings indicate that the proposed methodology could enable efficient segmentation of complex multimodal medical images, supporting clinicians in reaching more precise diagnoses and comprehensively monitoring pathological conditions (https://github.com/FarihaHossain/TeachFormer).
Author affiliations:
- Khondker Fariha Hossain, Sharif Amit Kamran, Alireza Tavakkoli: Department of Computer Science and Engineering, University of Nevada, Reno
- Joshua Ong: Department of Ophthalmology and Visual Sciences, University of Michigan Kellogg Eye Center
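The abstract outlines a distillation objective with three ingredients: supervision from ground-truth masks, matching the teachers' softened predictions, and matching intermediate attention maps. The following is a minimal PyTorch sketch of such a multi-teacher loss, written against that description rather than the authors' released code; the function name, the loss weights `alpha` and `beta`, and the choice to average the teachers' logits are illustrative assumptions.

```python
# A hedged sketch of a multi-teacher distillation loss for segmentation.
# Assumes logits of shape (B, C, H, W), integer masks of shape (B, H, W),
# and student/teacher attention maps that already share a common shape.
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list,
                          student_attn, teacher_attn_list,
                          labels, temperature=2.0, alpha=0.5, beta=0.1):
    # Supervised segmentation loss against the ground-truth masks.
    ce = F.cross_entropy(student_logits, labels)

    # Prediction distillation: KL divergence between the student's softened
    # predictions and the average of the teachers' softened predictions.
    t = temperature
    avg_teacher = torch.stack(teacher_logits_list).mean(dim=0)
    kd = F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(avg_teacher / t, dim=1),
        reduction="batchmean",
    ) * (t * t)

    # Attention distillation: mean-squared error between the student's
    # attention map and each teacher's, averaged over the teachers.
    attn = sum(F.mse_loss(student_attn, ta) for ta in teacher_attn_list)
    attn = attn / len(teacher_attn_list)

    return ce + alpha * kd + beta * attn
```

In a real pipeline the multimodal volumes (e.g. CT and PET) would typically be stacked along the channel dimension before entering the networks, and teacher attention maps would be resized or projected to the student's resolution before the MSE term is computed; both steps are omitted here for brevity.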