Lung adenocarcinoma subtype classification based on contrastive learning model with multimodal integration

Bibliographic Details
Main Authors: Changmiao Wang, Lijian Liu, Chenchen Fan, Yongquan Zhang, Zhijun Mai, Li Li, Zhou Liu, Yuan Tian, Jiahang Hu, Ahmed Elazab
Format: Article
Language: English
Published: Nature Portfolio, 2025-08-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-13818-2
Description
Summary: Accurately identifying the subtypes of lung adenocarcinoma is essential for selecting the most appropriate treatment plan. The task is complicated, however, by the need to integrate diverse data, by similarities among subtypes, and by the need to capture contextual features, all of which make precise differentiation difficult. We address these challenges with a multimodal deep neural network that integrates computed tomography (CT) images, annotated lesion bounding boxes, and electronic health records. The model first combines the CT scans with the bounding boxes, which supply precise lesion locations, and extracts features from the resulting regions of interest with a vision transformer module, yielding a richer semantic representation and more accurate localization. Beyond imaging, the model encodes clinical information with a fully connected encoder. Features from the CT and clinical branches are aligned by maximizing their cosine similarity with a contrastive language-image pre-training (CLIP) module, ensuring that they integrate cohesively. An attention-based feature fusion module then harmonizes the modality-specific features into a unified representation. This integrated feature set is fed into a classifier that distinguishes among the three adenocarcinoma subtypes. Finally, we employ focal loss to mitigate class imbalance and a contrastive learning loss to strengthen feature representations and improve performance. Experiments on public and proprietary datasets demonstrate the effectiveness of the model, which achieves a validation accuracy of 81.42% and an area under the curve of 0.9120, significantly outperforming recent multimodal classification approaches. The code is available at https://github.com/fancccc/LungCancerDC.
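
As a minimal sketch of the pipeline the abstract describes, assuming PyTorch and torchvision: the module names, feature dimensions, and the choice of vit_b_16 as the vision transformer are illustrative assumptions, not the authors' implementation (see the linked repository for that). ROI crops are assumed to be pre-extracted from the CT volumes using the lesion bounding boxes and resized to 224x224 slices.

```python
# Hypothetical sketch of the described architecture; names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vit_b_16

class MultimodalFusionNet(nn.Module):
    """Sketch: ViT branch for CT ROI crops, MLP encoder for clinical records,
    CLIP-style normalized embeddings, attention-based fusion, 3-way classifier."""

    def __init__(self, clinical_dim=32, embed_dim=256, num_classes=3):
        super().__init__()
        # Imaging branch: ViT over ROI crops (assumed pre-cropped to 224x224).
        vit = vit_b_16(weights=None)
        vit.heads = nn.Identity()                  # keep the 768-d CLS feature
        self.image_encoder = vit
        self.image_proj = nn.Linear(768, embed_dim)
        # Clinical branch: fully connected encoder for tabular EHR features.
        self.clinical_encoder = nn.Sequential(
            nn.Linear(clinical_dim, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )
        # Attention-based fusion: treat the two modality embeddings as a
        # 2-token sequence and let multi-head attention mix them.
        self.fusion = nn.MultiheadAttention(embed_dim, num_heads=4,
                                            batch_first=True)
        self.classifier = nn.Linear(embed_dim, num_classes)
        # Learnable temperature for the CLIP-style contrastive objective.
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # log(1/0.07)

    def forward(self, roi_images, clinical):
        # L2-normalized embeddings, as in CLIP, so cosine similarity is a dot product.
        img = F.normalize(self.image_proj(self.image_encoder(roi_images)), dim=-1)
        clin = F.normalize(self.clinical_encoder(clinical), dim=-1)
        tokens = torch.stack([img, clin], dim=1)   # (B, 2, D)
        fused, _ = self.fusion(tokens, tokens, tokens)
        logits = self.classifier(fused.mean(dim=1))  # pool the two tokens
        return logits, img, clin
```

In this sketch the two modality embeddings are treated as a two-token sequence so that multi-head attention can mix them into a unified representation; the paper's fusion module may be structured differently.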
ISSN: 2045-2322
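
The two training objectives the abstract names, focal loss for class imbalance and a contrastive loss over cosine similarities, could look roughly as follows; gamma, the temperature initialization, the symmetric InfoNCE form, and the loss weighting are assumptions the record does not specify.

```python
# Hypothetical sketch of the two losses; hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Focal loss: down-weights easy examples so minority subtypes
    contribute more to the gradient (gamma=2.0 is an assumed default)."""
    ce = F.cross_entropy(logits, targets, weight=alpha, reduction="none")
    pt = torch.exp(-ce)                        # probability of the true class
    return ((1.0 - pt) ** gamma * ce).mean()

def clip_alignment_loss(img_emb, clin_emb, logit_scale):
    """Symmetric InfoNCE over cosine similarities: the CT and clinical
    embeddings of the same patient are pulled together, all other pairings
    in the batch are pushed apart (embeddings assumed pre-normalized)."""
    sim = logit_scale.exp() * img_emb @ clin_emb.t()   # (B, B) similarities
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    return 0.5 * (F.cross_entropy(sim, targets) +
                  F.cross_entropy(sim.t(), targets))

# Hypothetical combined objective; the 0.5 weighting is an assumption.
# logits, img_emb, clin_emb = model(roi_images, clinical)
# loss = focal_loss(logits, labels) \
#        + 0.5 * clip_alignment_loss(img_emb, clin_emb, model.logit_scale)
```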