Causality Extraction from Medical Text Using Large Language Models (LLMs)

Bibliographic Details
Main Authors: Seethalakshmi Gopalakrishnan, Luciana Garbayo, Wlodek Zadrozny
Format: Article
Language: English
Published: MDPI AG 2024-12-01
Series: Information
Online Access: https://www.mdpi.com/2078-2489/16/1/13
Description
Summary: This study explores the potential of natural language models, including large language models, to extract causal relations from medical texts, specifically from clinical practice guidelines (CPGs). The outcomes of causality extraction from clinical practice guidelines for gestational diabetes are presented, marking a first in the field. Results are reported from experiments using BERT variants (BioBERT, DistilBERT, and BERT) and newer large language models (LLMs), namely GPT-4 and LLAMA2. The experiments show that BioBERT outperformed the other models, including the large language models, with an average F1-score of 0.72. GPT-4 and LLAMA2 achieved similar performance to each other but with less consistency. The code and an annotated corpus of causal statements within the clinical practice guidelines for gestational diabetes are released. Extracting causal structures might help identify LLMs' hallucinations and possibly prevent some medical errors if LLMs are used in patient settings. Practical extensions of extracting causal statements from medical text include providing additional diagnostic support based on less frequent cause–effect relationships, identifying possible inconsistencies in medical guidelines, and evaluating the evidence for recommendations.
ISSN:2078-2489
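
To illustrate the causality-extraction task the abstract describes, here is a minimal, rule-based sketch in Python. This is not the authors' method (which fine-tunes BERT variants and prompts GPT-4/LLAMA2); the cue patterns and the example sentence are illustrative assumptions, not material from the released corpus.

```python
import re

# Illustrative causal cue patterns; a hypothetical stand-in for the
# learned extractors (BioBERT etc.) discussed in the article.
CAUSAL_PATTERNS = [
    # "X increases the risk of Y"
    re.compile(r"(?P<cause>[\w\s-]+?)\s+increases? the risk of\s+(?P<effect>[\w\s-]+)", re.I),
    # "X leads to Y"
    re.compile(r"(?P<cause>[\w\s-]+?)\s+leads? to\s+(?P<effect>[\w\s-]+)", re.I),
]

def extract_causal_pairs(sentence: str) -> list[tuple[str, str]]:
    """Return (cause, effect) pairs matched by any cue pattern."""
    pairs = []
    for pat in CAUSAL_PATTERNS:
        for m in pat.finditer(sentence):
            pairs.append((m.group("cause").strip(), m.group("effect").strip()))
    return pairs

# Hypothetical guideline-style sentence (not taken from the annotated corpus).
sentence = "Untreated gestational diabetes increases the risk of macrosomia."
print(extract_causal_pairs(sentence))
# → [('Untreated gestational diabetes', 'macrosomia')]
```

Such pattern matching only captures explicit cue phrases; the article's point is that fine-tuned encoder models handle the varied, implicit causal language of clinical guidelines far better than rules like these.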