Systematic scoping review of external validation studies of AI pathology models for lung cancer diagnosis

Abstract Clinical adoption of digital pathology-based artificial intelligence models for diagnosing lung cancer has been limited, partly due to a lack of robust external validation. This review provides an overview of such tools, their performance and external validation. We systematically searched for external validation studies in medical, engineering and grey literature databases from 1st January 2010 to 31st October 2024. Twenty-two studies were included. Models performed various tasks, including classification of malignant versus non-malignant tissue, tumour growth pattern classification and subtyping of adeno- versus squamous cell carcinomas. Subtyping models were the most common and performed well, with average AUC values ranging from 0.746 to 0.999. Although most studies used restricted datasets, methodological issues relevant to the applicability of models in real-world settings included small and/or non-representative datasets, retrospective studies and case-control studies without further real-world validation. Ultimately, more rigorous external validation of models is warranted for increased clinical adoption.


Bibliographic Details
Main Authors: Soumya Arun, Mariia Grosheva, Mark Kosenko, Jan Lukas Robertus, Oleg Blyuss, Rhian Gabe, Daniel Munblit, Judith Offman
Format: Article
Language: English
Published: Nature Portfolio, 2025-06-01
Series: npj Precision Oncology
Online Access: https://doi.org/10.1038/s41698-025-00940-7
author Soumya Arun
Mariia Grosheva
Mark Kosenko
Jan Lukas Robertus
Oleg Blyuss
Rhian Gabe
Daniel Munblit
Judith Offman
collection DOAJ
description Abstract Clinical adoption of digital pathology-based artificial intelligence models for diagnosing lung cancer has been limited, partly due to a lack of robust external validation. This review provides an overview of such tools, their performance and external validation. We systematically searched for external validation studies in medical, engineering and grey literature databases from 1st January 2010 to 31st October 2024. Twenty-two studies were included. Models performed various tasks, including classification of malignant versus non-malignant tissue, tumour growth pattern classification and subtyping of adeno- versus squamous cell carcinomas. Subtyping models were the most common and performed well, with average AUC values ranging from 0.746 to 0.999. Although most studies used restricted datasets, methodological issues relevant to the applicability of models in real-world settings included small and/or non-representative datasets, retrospective studies and case-control studies without further real-world validation. Ultimately, more rigorous external validation of models is warranted for increased clinical adoption.
format Article
id doaj-art-c787ca603a804ec09ee43f7e2516862a
institution DOAJ
issn 2397-768X
language English
publishDate 2025-06-01
publisher Nature Portfolio
record_format Article
series npj Precision Oncology
spelling doaj-art-c787ca603a804ec09ee43f7e2516862a (indexed 2025-08-20T03:10:28Z)
Published 2025-06-01 by Nature Portfolio in npj Precision Oncology (ISSN 2397-768X); doi: 10.1038/s41698-025-00940-7
Systematic scoping review of external validation studies of AI pathology models for lung cancer diagnosis
Author affiliations:
Soumya Arun — Centre for Cancer Screening, Prevention and Early Diagnosis, Wolfson Institute of Population Health, Queen Mary University of London
Mariia Grosheva — Department of Paediatrics and Paediatric Infectious Diseases, Institute of Child’s Health, I.M. Sechenov First Moscow State Medical University, Sechenov University
Mark Kosenko — Department of Paediatrics and Paediatric Infectious Diseases, Institute of Child’s Health, I.M. Sechenov First Moscow State Medical University, Sechenov University
Jan Lukas Robertus — Department of Histopathology, Royal Brompton and Harefield, Guy’s and St Thomas’ NHS Foundation Trust
Oleg Blyuss — Centre for Cancer Screening, Prevention and Early Diagnosis, Wolfson Institute of Population Health, Queen Mary University of London
Rhian Gabe — Centre for Evaluation and Methods, Wolfson Institute of Population Health, Queen Mary University of London
Daniel Munblit — Department of Paediatrics and Paediatric Infectious Diseases, Institute of Child’s Health, I.M. Sechenov First Moscow State Medical University, Sechenov University
Judith Offman — Centre for Cancer Screening, Prevention and Early Diagnosis, Wolfson Institute of Population Health, Queen Mary University of London
title Systematic scoping review of external validation studies of AI pathology models for lung cancer diagnosis
url https://doi.org/10.1038/s41698-025-00940-7