CSEPC: a deep learning framework for classifying small-sample multimodal medical image data in Alzheimer’s disease

Bibliographic Details
Main Authors: Jingyuan Liu, Xiaojie Yu, Hidenao Fukuyama, Toshiya Murai, Jinglong Wu, Qi Li, Zhilin Zhang
Format: Article
Language: English
Published: BMC 2025-02-01
Series: BMC Geriatrics
Online Access: https://doi.org/10.1186/s12877-025-05771-6
Description
Background: Alzheimer’s disease (AD) is a neurodegenerative disorder that significantly impacts health care worldwide, particularly among the elderly population. Accurate classification of AD stages is essential for slowing disease progression and guiding effective interventions. However, limited sample sizes remain a significant challenge in classifying the stages of AD progression, and addressing this obstacle is crucial for improving diagnostic accuracy and optimizing treatment strategies for those affected by AD.

Methods: In this study, we proposed cross-scale equilibrium pyramid coupling (CSEPC), a novel diagnostic algorithm designed for small-sample multimodal medical imaging data. CSEPC leverages scale equilibrium theory and modal coupling properties to integrate semantic features from different imaging modalities and across multiple scales within each modality. The architecture first extracts balanced multiscale features from structural MRI (sMRI) and functional MRI (fMRI) data using a cross-scale pyramid module. These features are then combined through a contrastive learning-based cosine similarity coupling mechanism to capture intermodality associations effectively. This approach enhances the representation of both inter- and intramodal features while significantly reducing the number of learnable parameters, making it well suited to small-sample settings. We validated the CSEPC model through experiments on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset and demonstrated its superior performance in diagnosing and staging AD.

Results: The proposed model matches or exceeds the performance of models reported in previous AD classification studies. Specifically, it achieved an accuracy of 85.67% and an area under the curve (AUC) of 0.98 in classifying the progression from mild cognitive impairment (MCI) to AD. To further validate its effectiveness, we applied the method to diagnosing different stages of AD; in both classification tasks, our approach delivered superior performance.

Conclusions: The model’s performance across these tasks demonstrates its significant potential for small-sample multimodal medical image classification, particularly AD classification. This advancement could help clinicians effectively manage and intervene in the disease progression of patients with early-stage AD.
ISSN: 1471-2318
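
The Methods summary above describes two components: a cross-scale pyramid module that extracts multiscale features from sMRI and fMRI data, and a contrastive, cosine-similarity-based coupling between the two modality embeddings. The sketch below is a minimal PyTorch illustration of those two ideas only; the module names, layer sizes, pyramid scales, and the scale-averaging rule are hypothetical assumptions and do not reproduce the authors' actual CSEPC implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossScalePyramid(nn.Module):
    """Illustrative stand-in for a cross-scale pyramid module: extracts features
    at several spatial scales from one modality and projects each scale to a
    shared embedding size. For simplicity, both modalities are treated here as
    single-channel 3D volumes (an assumption, not the paper's preprocessing)."""

    def __init__(self, in_channels: int, embed_dim: int = 128):
        super().__init__()
        self.stem = nn.Conv3d(in_channels, 16, kernel_size=3, padding=1)
        # Three pyramid levels obtained by pooling the volume to coarser grids.
        self.scales = (8, 4, 2)
        self.pools = nn.ModuleList(nn.AdaptiveAvgPool3d(s) for s in self.scales)
        self.projs = nn.ModuleList(nn.Linear(16 * s ** 3, embed_dim) for s in self.scales)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = F.relu(self.stem(x))
        feats = [proj(pool(h).flatten(1)) for pool, proj in zip(self.pools, self.projs)]
        # "Equilibrium" is approximated here by averaging the per-scale embeddings
        # so no single scale dominates (a hypothetical balancing rule).
        return torch.stack(feats, dim=0).mean(dim=0)


def coupling_loss(z_smri: torch.Tensor, z_fmri: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Contrastive (InfoNCE-style) loss on cosine similarity: pulls together the
    sMRI and fMRI embeddings of the same subject and pushes apart embeddings of
    different subjects within the batch."""
    z1 = F.normalize(z_smri, dim=1)
    z2 = F.normalize(z_fmri, dim=1)
    logits = z1 @ z2.t() / tau                              # pairwise cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)    # matching subject on the diagonal
    return F.cross_entropy(logits, targets)


# Toy usage on random volumes (batch of 4 subjects, 32^3 voxels per volume).
smri_net, fmri_net = CrossScalePyramid(1), CrossScalePyramid(1)
smri = torch.randn(4, 1, 32, 32, 32)
fmri = torch.randn(4, 1, 32, 32, 32)
loss = coupling_loss(smri_net(smri), fmri_net(fmri))
```

Averaging the per-scale embeddings and sharing one lightweight projection per scale keeps the parameter count small, which is in the spirit of the abstract's emphasis on small-sample settings, though the authors' actual balancing and coupling mechanism may differ.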