LLM-driven multimodal target volume contouring in radiation oncology


Bibliographic Details
Main Authors: Yujin Oh, Sangjoon Park, Hwa Kyung Byun, Yeona Cho, Ik Jae Lee, Jin Sung Kim, Jong Chul Ye
Format: Article
Language: English
Published: Nature Portfolio 2024-10-01
Series: Nature Communications
Online Access: https://doi.org/10.1038/s41467-024-53387-y
Description
Summary: Target volume contouring for radiation therapy is considered significantly more challenging than normal organ segmentation, as it necessitates the use of both image- and text-based clinical information. Inspired by recent advances in large language models (LLMs), which can facilitate the integration of textual information and images, we present an LLM-driven multimodal artificial intelligence (AI), LLMSeg, that utilizes clinical information and is applicable to the challenging task of 3-dimensional, context-aware target volume delineation for radiation oncology. We validate the proposed LLMSeg in the context of breast cancer radiotherapy using external validation and data-insufficient environments, attributes that are highly conducive to real-world application. We demonstrate that the proposed multimodal LLMSeg exhibits markedly improved performance compared with conventional unimodal AI models, in particular robust generalization and data efficiency.
ISSN:2041-1723