A hybrid multi-instance learning-based identification of gastric adenocarcinoma differentiation on whole-slide images

Abstract
Objective: To investigate the potential of a hybrid multi-instance learning model (TGMIL), combining a Transformer and a graph attention network, for classifying gastric adenocarcinoma differentiation on whole-slide images (WSIs) without manual annotation.
Methods and materials: A hybrid multi-instance learning model, TGMIL, based on a Transformer and a graph attention network, was proposed to classify the differentiation of gastric adenocarcinoma. A total of 613 WSIs from patients with gastric adenocarcinoma were retrospectively collected from two hospitals. According to differentiation, the data were divided into four groups: normal (n = 254), well differentiated (n = 166), moderately differentiated (n = 75), and poorly differentiated (n = 118). The gold standard for differentiation classification was established blindly by two gastrointestinal pathologists. The WSIs were randomly split into a training set of 494 images and a test set of 119 images. The training set contained 203, 131, 62, and 98 WSIs in the normal, well, moderately, and poorly differentiated groups, respectively; the corresponding test-set counts were 51, 35, 13, and 20 WSIs.
Results: The TGMIL model performed well on the differentiation prediction task in terms of sensitivity, specificity, and area under the curve (AUC). We also compared its performance against five other models (MIL, CLAM_SB, CLAM_MB, DSMIL, and TransMIL) for classifying the differentiation of gastric cancer. TGMIL achieved a sensitivity of 73.33% and a specificity of 91.11%, with an AUC of 0.86.
Conclusions: The hybrid multi-instance learning model TGMIL could accurately classify the differentiation of gastric adenocarcinoma from WSIs without labor-intensive and time-consuming manual annotations, which can improve the efficiency and objectivity of diagnosis.
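The core mechanism shared by the MIL models named in the abstract is attention-weighted pooling of patch features into a slide-level prediction. The NumPy sketch below illustrates only that generic idea; it is not the authors' TGMIL (which combines a Transformer and a graph attention network), and all dimensions and weights are hypothetical, randomly initialised stand-ins.

```python
# Illustrative attention-based multi-instance pooling for one WSI "bag".
# NOT the authors' TGMIL; every parameter here is a hypothetical placeholder.
import numpy as np

rng = np.random.default_rng(0)

n_patches, feat_dim, attn_dim, n_classes = 100, 64, 32, 4  # 4 differentiation groups

# Patch (instance) features for one slide, e.g. from a frozen encoder.
H = rng.normal(size=(n_patches, feat_dim))

# Attention-network and classifier parameters (hypothetical).
V = 0.1 * rng.normal(size=(feat_dim, attn_dim))
w = 0.1 * rng.normal(size=(attn_dim, 1))
W_cls = 0.1 * rng.normal(size=(feat_dim, n_classes))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# One attention weight per patch, normalised over the whole bag.
attn = softmax((np.tanh(H @ V) @ w).ravel())   # shape (n_patches,)

# Slide-level embedding: attention-weighted sum of patch features.
z = attn @ H                                   # shape (feat_dim,)

# Slide-level probabilities over the four differentiation groups.
probs = softmax(z @ W_cls)
print(probs.shape)
```

Because the attention weights sum to one, the slide embedding is a convex combination of patch features, which is what lets such models be trained from slide-level labels alone, without per-patch annotation.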

Bibliographic Details
Main Authors: Mudan Zhang, Xinhuan Sun, Wuchao Li, Yin Cao, Chen Liu, Guilan Tu, Jian Wang, Rongpin Wang
Format: Article
Language:English
Published: BMC 2025-06-01
Series:BioMedical Engineering OnLine
Subjects: Gastric adenocarcinoma; Multi-instance learning; Differentiation; Whole-slide images; TGMIL
Online Access:https://doi.org/10.1186/s12938-025-01407-3
author Mudan Zhang
Xinhuan Sun
Wuchao Li
Yin Cao
Chen Liu
Guilan Tu
Jian Wang
Rongpin Wang
collection DOAJ
description Abstract Objective: To investigate the potential of a hybrid multi-instance learning model (TGMIL), combining a Transformer and a graph attention network, for classifying gastric adenocarcinoma differentiation on whole-slide images (WSIs) without manual annotation. Methods and materials: A hybrid multi-instance learning model, TGMIL, based on a Transformer and a graph attention network, was proposed to classify the differentiation of gastric adenocarcinoma. A total of 613 WSIs from patients with gastric adenocarcinoma were retrospectively collected from two hospitals. According to differentiation, the data were divided into four groups: normal (n = 254), well differentiated (n = 166), moderately differentiated (n = 75), and poorly differentiated (n = 118). The gold standard for differentiation classification was established blindly by two gastrointestinal pathologists. The WSIs were randomly split into a training set of 494 images and a test set of 119 images. The training set contained 203, 131, 62, and 98 WSIs in the normal, well, moderately, and poorly differentiated groups, respectively; the corresponding test-set counts were 51, 35, 13, and 20 WSIs. Results: The TGMIL model performed well on the differentiation prediction task in terms of sensitivity, specificity, and area under the curve (AUC). We also compared its performance against five other models (MIL, CLAM_SB, CLAM_MB, DSMIL, and TransMIL) for classifying the differentiation of gastric cancer. TGMIL achieved a sensitivity of 73.33% and a specificity of 91.11%, with an AUC of 0.86.
Conclusions: The hybrid multi-instance learning model TGMIL could accurately classify the differentiation of gastric adenocarcinoma from WSIs without labor-intensive and time-consuming manual annotations, which can improve the efficiency and objectivity of diagnosis.
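As a quick consistency check, the group sizes and the train/test split reported in the abstract can be verified arithmetically:

```python
# Sanity check of the dataset split reported in the abstract: 613 WSIs in
# four differentiation groups, split into 494 training and 119 test images.
groups = {            # (train, test) WSI counts per group
    "normal":     (203, 51),
    "well":       (131, 35),
    "moderately": (62, 13),
    "poorly":     (98, 20),
}

train_total = sum(tr for tr, te in groups.values())   # 494
test_total = sum(te for tr, te in groups.values())    # 119

# Per-group totals match the reported group sizes (254, 166, 75, 118),
# and the grand total matches the 613 collected WSIs.
per_group = {g: tr + te for g, (tr, te) in groups.items()}
print(train_total, test_total, per_group)
```

The numbers are internally consistent: each group's train and test counts sum to its reported size, and the split totals sum to 613.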
format Article
id doaj-art-a899d3d1158f40cbb70593df78f34b19
institution Kabale University
issn 1475-925X
language English
publishDate 2025-06-01
publisher BMC
record_format Article
series BioMedical Engineering OnLine
Author affiliations:
Mudan Zhang, Xinhuan Sun, Wuchao Li, Rongpin Wang: Department of Radiology, Guizhou Provincial Key Laboratory of Intelligent Medical Image Analysis and Precision Diagnosis, Guizhou Provincial People’s Hospital
Yin Cao: Department of Pathology, Guizhou Provincial People’s Hospital
Chen Liu, Jian Wang: Department of Radiology, Southwest Hospital, Army Medical University (Third Military Medical University)
Guilan Tu: Laboratory Department, Guizhou Provincial Center for Clinical Laboratory
title A hybrid multi-instance learning-based identification of gastric adenocarcinoma differentiation on whole-slide images
topic Gastric adenocarcinoma
Multi-instance learning
Differentiation
Whole-slide images
TGMIL
url https://doi.org/10.1186/s12938-025-01407-3