URFM: A general Ultrasound Representation Foundation Model for advancing ultrasound image diagnosis


Bibliographic Details
Main Authors: Qingbo Kang, Qicheng Lao, Jun Gao, Wuyongga Bao, Zhu He, Chenlin Du, Qiang Lu, Kang Li
Format: Article
Language: English
Published: Elsevier 2025-08-01
Series: iScience
Online Access: http://www.sciencedirect.com/science/article/pii/S2589004225011782
Description
Summary: Ultrasound imaging is critical for clinical diagnostics, providing insights into various diseases and organs. However, artificial intelligence (AI) in this field faces challenges, such as the need for large labeled datasets and the limited applicability of task-specific models, particularly due to ultrasound's low signal-to-noise ratio (SNR). To overcome these challenges, we introduce the Ultrasound Representation Foundation Model (URFM), designed to learn robust, generalizable representations from unlabeled ultrasound images and enable label-efficient adaptation to diverse diagnostic tasks. URFM is pre-trained on over 1M images spanning 15 major anatomical organs using representation-based masked image modeling (MIM), an advanced self-supervised learning approach. Unlike traditional pixel-based MIM, URFM integrates high-level representations from BiomedCLIP, a specialized medical vision-language model, to address the low-SNR issue. Extensive evaluation shows that URFM outperforms state-of-the-art methods, offering enhanced generalization, label efficiency, and training-time efficiency. URFM's scalability and flexibility signal a significant advancement in diagnostic accuracy and clinical workflow optimization for ultrasound imaging.
ISSN: 2589-0042
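
Illustrative note (not part of the record): the summary describes representation-based masked image modeling, in which a student encoder is trained to predict high-level features of masked patches produced by a frozen teacher (BiomedCLIP's image encoder in URFM) rather than reconstructing raw pixels. The sketch below is a minimal, hypothetical PyTorch rendering of that idea under stated assumptions; the class names, dimensions, and cosine-regression loss are placeholders chosen for illustration, and random tensors stand in for BiomedCLIP features. It is not the authors' implementation.

```python
# Minimal sketch of representation-based MIM with a frozen teacher target.
# All names (TinyViT, RepresentationMIM) and sizes are hypothetical assumptions;
# in URFM the target features would come from BiomedCLIP's image encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyViT(nn.Module):
    """Very small ViT-style patch encoder (hypothetical stand-in for the student backbone)."""

    def __init__(self, img_size=224, patch=16, dim=256, depth=4, heads=4):
        super().__init__()
        self.num_patches = (img_size // patch) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)

    def forward(self, x, mask_token=None, mask=None):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, D)
        if mask is not None:
            # Replace masked patch tokens with a learned [MASK] embedding.
            tokens = torch.where(mask.unsqueeze(-1), mask_token.expand_as(tokens), tokens)
        return self.blocks(tokens + self.pos)


class RepresentationMIM(nn.Module):
    """Student predicts the frozen teacher's patch representations at masked positions."""

    def __init__(self, dim=256, teacher_dim=512):
        super().__init__()
        self.student = TinyViT(dim=dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, teacher_dim)  # project into the teacher's feature space

    def forward(self, images, teacher_feats, mask):
        pred = self.head(self.student(images, self.mask_token, mask))
        # Cosine-style regression on masked tokens only (one common choice of MIM target loss).
        pred = F.normalize(pred, dim=-1)
        target = F.normalize(teacher_feats, dim=-1)
        return (1 - (pred * target).sum(-1))[mask].mean()


if __name__ == "__main__":
    B, N, teacher_dim = 2, 196, 512
    model = RepresentationMIM(teacher_dim=teacher_dim)
    images = torch.randn(B, 3, 224, 224)
    # Placeholder targets; in URFM these would be BiomedCLIP patch features.
    teacher_feats = torch.randn(B, N, teacher_dim)
    mask = torch.rand(B, N) < 0.75  # mask roughly 75% of patches
    print(float(model(images, teacher_feats, mask)))
```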