Aligning knowledge concepts to whole slide images for precise histopathology image analysis

Bibliographic Details
Main Authors: Weiqin Zhao, Ziyu Guo, Yinshuang Fan, Yuming Jiang, Maximus C. F. Yeung, Lequan Yu
Format: Article
Language: English
Published: Nature Portfolio 2024-12-01
Series: npj Digital Medicine
Online Access: https://doi.org/10.1038/s41746-024-01411-2
Description
Summary: Due to their large size and the lack of fine-grained annotation, the analysis of Whole Slide Images (WSIs) is commonly approached as a Multiple Instance Learning (MIL) problem. However, previous studies learn only from training data, in stark contrast to how human clinicians teach each other and reason about histopathologic entities and factors. Here, we present a novel knowledge concept-based MIL framework, named ConcepPath, to fill this gap. Specifically, ConcepPath utilizes GPT-4 to induce reliable disease-specific human expert concepts from medical literature and incorporates them with a group of purely learnable concepts to extract complementary knowledge from training data. In ConcepPath, WSIs are aligned to these linguistic knowledge concepts by utilizing a pathology vision-language model as the basic building component. On lung cancer subtyping, breast cancer HER2 scoring, and gastric cancer immunotherapy-sensitive subtyping tasks, ConcepPath significantly outperformed previous state-of-the-art (SOTA) methods, which lacked the guidance of human expert knowledge.
ISSN: 2398-6352
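
As a concrete illustration of the concept-alignment idea sketched in the summary above, the following is a minimal, hypothetical PyTorch sketch. It assumes patch embeddings come from a pathology vision-language model's image encoder and that expert concept phrases have already been encoded by the matching text encoder; the class name `ConceptMILHead`, the dimensions, and the attention-style pooling are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: score patch embeddings against expert-derived and
# learnable concept embeddings, then aggregate into a slide-level prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptMILHead(nn.Module):
    def __init__(self, embed_dim: int, expert_concepts: torch.Tensor,
                 n_learnable: int, n_classes: int):
        super().__init__()
        # Frozen text embeddings of expert concepts (e.g. induced from literature).
        self.register_buffer("expert_concepts", F.normalize(expert_concepts, dim=-1))
        # Purely learnable concept vectors that complement the expert ones.
        self.learnable_concepts = nn.Parameter(torch.randn(n_learnable, embed_dim) * 0.02)
        n_concepts = expert_concepts.shape[0] + n_learnable
        # Slide-level classifier over the aggregated concept-alignment profile.
        self.classifier = nn.Linear(n_concepts, n_classes)

    def forward(self, patch_embeds: torch.Tensor) -> torch.Tensor:
        # patch_embeds: (n_patches, embed_dim) from the vision encoder.
        patches = F.normalize(patch_embeds, dim=-1)
        concepts = torch.cat(
            [self.expert_concepts, F.normalize(self.learnable_concepts, dim=-1)], dim=0
        )
        # Cosine similarity between every patch and every concept: (n_patches, n_concepts).
        sim = patches @ concepts.t()
        # Attention-style pooling: patches that align strongly with a concept
        # contribute more to the slide-level concept profile.
        attn = torch.softmax(sim, dim=0)
        slide_profile = (attn * sim).sum(dim=0)   # (n_concepts,)
        return self.classifier(slide_profile)     # (n_classes,) slide-level logits

# Usage with random stand-ins for real embeddings:
expert = torch.randn(8, 512)                      # 8 expert concept text embeddings
head = ConceptMILHead(512, expert, n_learnable=4, n_classes=3)
logits = head(torch.randn(1000, 512))             # 1000 patches from one WSI
```

The design choice reflected here is only the general pattern the abstract describes: expert concepts stay fixed as linguistic anchors, learnable concepts absorb complementary signal from training data, and the slide-level decision is made from how strongly the bag of patches aligns with each concept.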