MRI-based deep learning with clinical and imaging features to differentiate medulloblastoma and ependymoma in children


Bibliographic Details
Main Authors: Yasen Yimit, Parhat Yasin, Yue Hao, Abudouresuli Tuersun, Chencui Huang, Xiaoguang Zou, Ya Qiu, Yunling Wang, Mayidili Nijiati
Format: Article
Language: English
Published: Frontiers Media S.A., 2025-04-01
Series: Frontiers in Molecular Biosciences
Subjects:
Online Access: https://www.frontiersin.org/articles/10.3389/fmolb.2025.1570860/full
author Yasen Yimit
Parhat Yasin
Yue Hao
Abudouresuli Tuersun
Chencui Huang
Xiaoguang Zou
Ya Qiu
Yunling Wang
Mayidili Nijiati
author_sort Yasen Yimit
collection DOAJ
description Background: Medulloblastoma (MB) and ependymoma (EM) in children overlap in age group, tumor location, and clinical presentation, which makes them difficult to diagnose and distinguish clinically.
Purpose: This study explores the effectiveness of T2-weighted magnetic resonance imaging (MRI)-based deep learning (DL) combined with clinical and imaging features for differentiating MB from EM.
Methods: Axial T2-weighted MRI sequences from 201 patients across three study centers were used for model training and testing. Regions of interest were manually delineated by an experienced neuroradiologist under the supervision of a senior radiologist. A DL classifier was developed from a pretrained AlexNet architecture fine-tuned on our dataset. To mitigate class imbalance, data augmentation was applied, and K-fold cross-validation was used to enhance model generalizability. For patient-level classification, two voting strategies were used: hard voting, in which the majority prediction across individual image slices was selected, and soft voting, in which prediction scores were averaged across slices and thresholded at 0.5. Additionally, a multimodality fusion model was constructed by integrating the DL classifier with clinical and imaging features. Model performance was assessed on a 7:3 random split of the dataset into training and validation sets. Sensitivity, specificity, positive predictive value, negative predictive value, F1 score, area under the receiver operating characteristic curve (AUC), and accuracy were calculated, and AUC values were compared using the DeLong test. For these analyses, MB was treated as the positive class and EM as the negative class.
Results: The DL model with the hard voting strategy achieved AUC values of 0.712 (95% confidence interval (CI): 0.625–0.797) on the training set and 0.689 (95% CI: 0.554–0.826) on the test set. In contrast, the multimodality fusion model performed markedly better, with AUC values of 0.987 (95% CI: 0.974–0.996) on the training set and 0.889 (95% CI: 0.803–0.949) on the test set. The DeLong test indicated a statistically significant improvement in AUC for the fusion model over the DL model (p < 0.001), highlighting its enhanced discriminative ability.
Conclusion: T2-weighted MRI-based DL combined with multimodal clinical and imaging features can effectively differentiate MB from EM in children, and the interpretable structure of the decision tree classifier is expected to assist clinicians in daily practice.
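The two patient-level voting strategies described in the abstract can be sketched as follows. This is a minimal illustration, not the study's code: the per-slice probabilities are invented for demonstration, and the 0.5 threshold is the one stated in the Methods (MB coded as 1/positive, EM as 0/negative).

```python
def hard_vote(slice_probs, threshold=0.5):
    """Hard voting: binarize each slice prediction, then take the majority."""
    votes = [1 if p >= threshold else 0 for p in slice_probs]
    return 1 if sum(votes) > len(votes) / 2 else 0

def soft_vote(slice_probs, threshold=0.5):
    """Soft voting: average the per-slice MB probabilities, then threshold."""
    mean_prob = sum(slice_probs) / len(slice_probs)
    return 1 if mean_prob >= threshold else 0

# Hypothetical example: five axial T2 slices from one patient.
probs = [0.62, 0.48, 0.55, 0.30, 0.71]
print(hard_vote(probs))  # 3 of 5 slices exceed 0.5 -> 1 (MB)
print(soft_vote(probs))  # mean probability 0.532 >= 0.5 -> 1 (MB)
```

The two strategies can disagree: a patient with a few very confident MB slices but a majority of weak EM slices may be labeled MB by soft voting yet EM by hard voting, which is why the study reports them separately.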
format Article
id doaj-art-55e1c00c979e43b1b0a60f83cc7dd0c3
institution OA Journals
issn 2296-889X
language English
publishDate 2025-04-01
publisher Frontiers Media S.A.
record_format Article
series Frontiers in Molecular Biosciences
spelling doaj-art-55e1c00c979e43b1b0a60f83cc7dd0c3 (updated 2025-08-20T02:19:57Z), eng, Frontiers Media S.A., Frontiers in Molecular Biosciences, ISSN 2296-889X, 2025-04-01, vol. 12, doi: 10.3389/fmolb.2025.1570860, article 1570860
Author affiliations:
Yasen Yimit: Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashgar, China; The Xinjiang Key Laboratory of Artificial Intelligence Assisted Imaging Diagnosis, Varanasi, China
Parhat Yasin: The Sixth Affiliated Hospital of Xinjiang Medical University Department of Spine Surgery, Urumqi, China
Yue Hao: Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashgar, China
Abudouresuli Tuersun: Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashgar, China; The Xinjiang Key Laboratory of Artificial Intelligence Assisted Imaging Diagnosis, Varanasi, China
Chencui Huang: Department of Research Collaboration, R&D Center, Beijing Deepwise and League of PHD Technology Co., Ltd., Beijing, China
Xiaoguang Zou: The Xinjiang Key Laboratory of Artificial Intelligence Assisted Imaging Diagnosis, Varanasi, China; Clinical Medical Research Center, The First People’s Hospital of Kashi (Kashgar) Prefecture, Kashgar, China
Ya Qiu: Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashgar, China; The Xinjiang Key Laboratory of Artificial Intelligence Assisted Imaging Diagnosis, Varanasi, China
Yunling Wang: First Affiliated Hospital of Xinjiang Medical University Department of Imaging Center, Urumqi, China
Mayidili Nijiati: The Xinjiang Key Laboratory of Artificial Intelligence Assisted Imaging Diagnosis, Varanasi, China; Department of Radiology, The Fourth Affiliated Hospital of Xinjiang Medical University, Urumqi, China
title MRI-based deep learning with clinical and imaging features to differentiate medulloblastoma and ependymoma in children
topic deep learning
magnetic resonance imaging
medulloblastoma
ependymoma
T2-weighted imaging
url https://www.frontiersin.org/articles/10.3389/fmolb.2025.1570860/full
work_keys_str_mv AT yasenyimit mribaseddeeplearningwithclinicalandimagingfeaturestodifferentiatemedulloblastomaandependymomainchildren
AT parhatyasin mribaseddeeplearningwithclinicalandimagingfeaturestodifferentiatemedulloblastomaandependymomainchildren
AT yuehao mribaseddeeplearningwithclinicalandimagingfeaturestodifferentiatemedulloblastomaandependymomainchildren
AT abudouresulituersun mribaseddeeplearningwithclinicalandimagingfeaturestodifferentiatemedulloblastomaandependymomainchildren
AT chencuihuang mribaseddeeplearningwithclinicalandimagingfeaturestodifferentiatemedulloblastomaandependymomainchildren
AT xiaoguangzou mribaseddeeplearningwithclinicalandimagingfeaturestodifferentiatemedulloblastomaandependymomainchildren
AT yaqiu mribaseddeeplearningwithclinicalandimagingfeaturestodifferentiatemedulloblastomaandependymomainchildren
AT yunlingwang mribaseddeeplearningwithclinicalandimagingfeaturestodifferentiatemedulloblastomaandependymomainchildren
AT mayidilinijiati mribaseddeeplearningwithclinicalandimagingfeaturestodifferentiatemedulloblastomaandependymomainchildren