Deep learning quantifies pathologists’ visual patterns for whole slide image diagnosis
Abstract: Drawing on pathologists' expertise, pixelwise manual annotation has provided substantial support for training deep learning models for whole slide image (WSI)-assisted diagnosis. However, collecting pixelwise annotations demands a massive amount of pathologists' time, placing a heavy burden on medical manpower and hindering the construction of larger datasets and more precise diagnostic models. To capture pathologists' expertise with minimal workload while still achieving precise diagnosis, we record pathologists' image review patterns with eye-tracking devices. Alongside this, we design a deep learning system, the Pathology Expertise Acquisition Network (PEAN), which decodes pathologists' expertise from the collected visual patterns and then diagnoses WSIs. Eye tracking reduces the time required to annotate WSIs to 4% of that needed for manual annotation. We evaluate PEAN on 5881 WSIs covering 5 categories of skin lesions, where it achieves a high area under the curve of 0.992 and an accuracy of 96.3% for diagnostic prediction. This study addresses existing models' inability to learn from pathologists' diagnostic processes; its efficient data annotation and precise diagnostics support both large-scale data collection and clinical care.
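The record gives no implementation details for PEAN, but the abstract's central idea, replacing pixelwise annotation with pathologists' gaze data, can be illustrated with a minimal sketch. The function `fixations_to_attention`, its `patch_size` parameter, and the `(x, y, duration_ms)` fixation format below are illustrative assumptions rather than the authors' actual pipeline: the sketch simply accumulates fixation durations into a patch-level attention map that could serve as weak supervision for a WSI classifier.

```python
# Minimal sketch (not the authors' PEAN pipeline): turn eye-tracking fixations
# recorded over a WSI into a patch-level attention map usable as weak labels
# in place of pixelwise annotation. All names and parameters are assumptions.
import numpy as np

def fixations_to_attention(fixations, slide_width, slide_height, patch_size=256):
    """fixations: iterable of (x, y, duration_ms) in slide-level pixel coordinates.
    Returns a 2D array whose cells hold total gaze time per patch, scaled to [0, 1]."""
    n_cols = int(np.ceil(slide_width / patch_size))
    n_rows = int(np.ceil(slide_height / patch_size))
    attention = np.zeros((n_rows, n_cols), dtype=np.float64)
    for x, y, duration_ms in fixations:
        col = min(int(x // patch_size), n_cols - 1)
        row = min(int(y // patch_size), n_rows - 1)
        attention[row, col] += duration_ms          # accumulate dwell time per patch
    if attention.max() > 0:
        attention /= attention.max()                # most-viewed patch becomes 1.0
    return attention

# Example: three fixations on a 100,000 x 80,000 pixel slide.
heatmap = fixations_to_attention(
    [(51200, 30720, 400.0), (51300, 30800, 650.0), (12800, 70000, 120.0)],
    slide_width=100_000, slide_height=80_000)
print(heatmap.shape, heatmap.max())
```

In a real pipeline the fixation coordinates would also have to be mapped through the viewer's pan and zoom state to the slide's scanning resolution, which this sketch ignores.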
| Main Authors: | Tianhang Nan, Song Zheng, Siyuan Qiao, Hao Quan, Xin Gao, Jun Niu, Bin Zheng, Chunfang Guo, Yue Zhang, Xiaoqin Wang, Liping Zhao, Ze Wu, Yaoxing Guo, Xingyu Li, Mingchen Zou, Shuangdi Ning, Yue Zhao, Wei Qian, Hongduo Chen, Ruiqun Qi, Xinghua Gao, Xiaoyu Cui |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-07-01 |
| Series: | Nature Communications |
| Online Access: | https://doi.org/10.1038/s41467-025-60307-1 |
| _version_ | 1849238519228137472 |
|---|---|
| author | Tianhang Nan, Song Zheng, Siyuan Qiao, Hao Quan, Xin Gao, Jun Niu, Bin Zheng, Chunfang Guo, Yue Zhang, Xiaoqin Wang, Liping Zhao, Ze Wu, Yaoxing Guo, Xingyu Li, Mingchen Zou, Shuangdi Ning, Yue Zhao, Wei Qian, Hongduo Chen, Ruiqun Qi, Xinghua Gao, Xiaoyu Cui |
| author_sort | Tianhang Nan |
| collection | DOAJ |
| description | Abstract: Drawing on pathologists' expertise, pixelwise manual annotation has provided substantial support for training deep learning models for whole slide image (WSI)-assisted diagnosis. However, collecting pixelwise annotations demands a massive amount of pathologists' time, placing a heavy burden on medical manpower and hindering the construction of larger datasets and more precise diagnostic models. To capture pathologists' expertise with minimal workload while still achieving precise diagnosis, we record pathologists' image review patterns with eye-tracking devices. Alongside this, we design a deep learning system, the Pathology Expertise Acquisition Network (PEAN), which decodes pathologists' expertise from the collected visual patterns and then diagnoses WSIs. Eye tracking reduces the time required to annotate WSIs to 4% of that needed for manual annotation. We evaluate PEAN on 5881 WSIs covering 5 categories of skin lesions, where it achieves a high area under the curve of 0.992 and an accuracy of 96.3% for diagnostic prediction. This study addresses existing models' inability to learn from pathologists' diagnostic processes; its efficient data annotation and precise diagnostics support both large-scale data collection and clinical care. (A sketch of how such multi-class evaluation metrics can be computed follows the record table below.) |
| format | Article |
| id | doaj-art-22493a42682c4ae9a91c53aada7beb60 |
| institution | Kabale University |
| issn | 2041-1723 |
| language | English |
| publishDate | 2025-07-01 |
| publisher | Nature Portfolio |
| record_format | Article |
| series | Nature Communications |
| author affiliations | Tianhang Nan, Hao Quan, Bin Zheng, Xingyu Li, Mingchen Zou, Shuangdi Ning, Yue Zhao, Wei Qian, Xiaoyu Cui: College of Medicine and Biological Information Engineering, Northeastern University. Song Zheng, Yaoxing Guo, Hongduo Chen, Ruiqun Qi, Xinghua Gao: Department of Dermatology, The First Hospital of China Medical University. Siyuan Qiao: College of Computer Science and Technology, Fudan University. Xin Gao, Ze Wu: Computer Science Program, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST). Jun Niu: Department of Dermatology, General Hospital of Northern Theater Command. Chunfang Guo: Department of Dermatology, Shenyang Seventh People's Hospital. Yue Zhang: Department of Dermatology, Shengjing Hospital of China Medical University. Xiaoqin Wang: Center of Excellence on Generative AI, King Abdullah University of Science and Technology. Liping Zhao: Department of Dermatology, Zhongyi Northeast International Hospital. |
| title | Deep learning quantifies pathologists’ visual patterns for whole slide image diagnosis |
| url | https://doi.org/10.1038/s41467-025-60307-1 |
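The abstract and description above report an area under the curve of 0.992 and an accuracy of 96.3% across five skin-lesion categories. The sketch below shows how such multi-class metrics are commonly computed with scikit-learn on placeholder predictions; the macro one-vs-rest averaging and the synthetic data are assumptions, since the record does not state the authors' exact evaluation protocol.

```python
# Sketch of the evaluation metrics quoted in the abstract: multi-class accuracy
# and area under the ROC curve for a 5-class diagnosis task. The macro
# one-vs-rest averaging and the synthetic predictions are assumptions; the
# record does not describe the authors' actual evaluation setup.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_classes = 200, 5                         # placeholder data, not the study's 5881 WSIs
y_true = rng.integers(0, n_classes, size=n_samples)   # ground-truth lesion categories
logits = rng.normal(size=(n_samples, n_classes))
logits[np.arange(n_samples), y_true] += 3.0           # bias predictions toward the true class
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax probabilities

accuracy = accuracy_score(y_true, probs.argmax(axis=1))
auc = roc_auc_score(y_true, probs, multi_class="ovr", average="macro")
print(f"accuracy = {accuracy:.3f}, macro OvR AUC = {auc:.3f}")
```

Macro one-vs-rest averaging treats each of the five categories as its own binary problem and averages the per-class AUCs, so rare categories are not swamped by common ones; whether the paper used this or a different averaging scheme is not stated in the record.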