CSEPC: a deep learning framework for classifying small-sample multimodal medical image data in Alzheimer’s disease

Bibliographic Details
Main Authors: Jingyuan Liu, Xiaojie Yu, Hidenao Fukuyama, Toshiya Murai, Jinglong Wu, Qi Li, Zhilin Zhang
Format: Article
Language: English
Published: BMC, 2025-02-01
Series: BMC Geriatrics
Subjects: Alzheimer’s disease; Deep learning; Small sample; Intermodality and intramodality; Multimodal medical images
Online Access:https://doi.org/10.1186/s12877-025-05771-6
Collection: DOAJ
Abstract

Background: Alzheimer’s disease (AD) is a neurodegenerative disorder that places a significant burden on health care worldwide, particularly among the elderly. Accurate classification of AD stages is essential for slowing disease progression and guiding effective interventions, yet limited sample sizes remain a major obstacle to staging AD progression. Addressing this obstacle is crucial for improving diagnostic accuracy and optimizing treatment strategies for those affected by AD.

Methods: In this study, we propose cross-scale equilibrium pyramid coupling (CSEPC), a novel diagnostic algorithm designed for small-sample multimodal medical imaging data. CSEPC leverages scale equilibrium theory and modal coupling properties to integrate semantic features across imaging modalities and across multiple scales within each modality. The architecture first extracts balanced multiscale features from structural MRI (sMRI) and functional MRI (fMRI) data with a cross-scale pyramid module, then couples these features through a contrastive learning-based cosine similarity mechanism to capture intermodality associations. This approach enriches both inter- and intramodal representations while substantially reducing the number of learnable parameters, making it well suited to small-sample settings. We validated the CSEPC model on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset and evaluated its performance in diagnosing and staging AD.

Results: The proposed model matches or exceeds the performance of models from previous studies in AD classification. Specifically, it achieved an accuracy of 85.67% and an area under the curve (AUC) of 0.98 in classifying progression from mild cognitive impairment (MCI) to AD. To further validate its effectiveness, we applied the method to diagnosing different stages of AD; in both classification tasks, our approach delivered superior performance.

Conclusions: The model’s performance across these tasks demonstrates its potential for small-sample multimodal medical image classification, particularly AD classification. This advance could help clinicians manage and intervene in disease progression for patients with early-stage AD.
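The two ingredients the abstract names, multiscale pyramid pooling and a contrastive cosine-similarity coupling between modalities, can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the pooling scales, and the use of an InfoNCE-style loss over matched sMRI/fMRI feature pairs are all illustrative assumptions.

```python
# Hypothetical sketch of the abstract's two ideas:
# (1) pool a feature vector at several scales ("pyramid"),
# (2) couple sMRI and fMRI features with a contrastive, cosine-similarity loss.
# Shapes, scales, and the InfoNCE form are assumptions, not the paper's code.
import math

def pyramid_features(features, scales=(1, 2, 4)):
    """Average-pool a 1-D feature list at each scale and concatenate the results."""
    pooled = []
    for s in scales:
        chunk = max(1, len(features) // s)  # split into s roughly equal chunks
        for i in range(0, len(features), chunk):
            window = features[i:i + chunk]
            pooled.append(sum(window) / len(window))
    return pooled

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def coupling_loss(smri_batch, fmri_batch, temperature=0.1):
    """InfoNCE-style coupling: the matched sMRI/fMRI pair in a batch should
    have the highest cosine similarity among all cross-modal pairs."""
    n = len(smri_batch)
    loss = 0.0
    for i in range(n):
        sims = [cosine(smri_batch[i], f) / temperature for f in fmri_batch]
        m = max(sims)  # log-sum-exp trick for numerical stability
        log_denom = m + math.log(sum(math.exp(s - m) for s in sims))
        loss += -(sims[i] - log_denom)
    return loss / n
```

With orthogonal matched pairs, e.g. `coupling_loss([[1, 0], [0, 1]], [[1, 0], [0, 1]])`, the loss is near zero, since each sMRI feature is most similar to its own fMRI counterpart; mismatched batches yield larger values.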
ISSN: 1471-2318

Author affiliations:
Jingyuan Liu: School of Computer Science and Technology, Changchun University of Science and Technology
Xiaojie Yu: School of Computer Science and Technology, Changchun University of Science and Technology
Hidenao Fukuyama: Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences
Toshiya Murai: Department of Psychiatry, Graduate School of Medicine, Kyoto University
Jinglong Wu: Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences
Qi Li: School of Computer Science and Technology, Changchun University of Science and Technology
Zhilin Zhang: Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences