The future of healthcare using multimodal AI: Technology that can read, see, hear and sense


Bibliographic Details
Main Author: Divya Rao
Format: Article
Language: English
Published: Elsevier 2024-06-01
Series: Oral Oncology Reports
Online Access: http://www.sciencedirect.com/science/article/pii/S2772906024001869
Description
Summary: With the emergence of large language models like ChatGPT, Artificial Intelligence (AI) has rapidly advanced, garnering widespread usage in various fields, including healthcare. This article explores the trajectory of AI in medicine, examining the transition from unimodal deep learning models to multimodal AI systems capable of processing diverse data formats. Multimodal models that integrate text, images, and other sensory data offer unprecedented capabilities, from providing comprehensive patient histories to aiding in differential diagnosis. However, ethical and security concerns accompany the use of extensive patient data. While multimodal models show promise, they are not infallible. By automating routine tasks, AI affords clinicians more time to focus on patient care, emphasizing the importance of human empathy and expertise in healthcare delivery. The article underscores the potential of multimodal AI in healthcare, provided it is developed and deployed responsibly alongside human clinical expertise.
ISSN: 2772-9060