To take a different approach: Can large language models provide knowledge related to respiratory aspiration?

Bibliographic Details
Main Authors: Yirou Niu, Shuojin Fu, Zehui Xuan, Ruifu Kang, Zhifang Ren, Shuai Jin, Yanling Wang, Qian Xiao
Format: Article
Language: English
Published: SAGE Publishing, 2025-07-01
Series: Digital Health
Online Access:https://doi.org/10.1177/20552076251349616
Collection: DOAJ
Description: Objective: To investigate the performance (accuracy, comprehensiveness, consistency, and necessary information ratio) of large language models (LLMs) in providing knowledge related to respiratory aspiration, and to explore their potential as training tools. Methods: This was a non-human-subject evaluative study. Two LLMs (GPT-3.5 and GPT-4) were asked 36 questions (32 objective and four subjective) about respiratory aspiration, in English and in Chinese. Responses were scored by two experts against gold standards derived from authoritative books. The accuracy of the two LLMs' responses to objective questions was compared using the chi-square test or Fisher's exact test; for subjective questions, the t-test or Mann–Whitney U test was used to compare differences between the two LLMs. Results: There was no significant difference between the two experts' ratings. Both LLMs scored highly on the objective questions and also performed well on the subjective questions, showing high accuracy, comprehensiveness, consistency, and necessary information ratios. No significant differences were found between the two LLMs in the accuracy of the English and Chinese responses to subjective questions (z = 0.331, p = 0.886; z = 1.703, p = 0.114), nor in the comprehensiveness of the English and Chinese responses (t = 0.787, p = 0.461; t = 1.175, p = 0.285). Conclusions: LLMs demonstrated promising performance in delivering respiratory aspiration-related knowledge and showed promise as supportive training tools, particularly when their limitations are well understood.
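The statistical comparisons described in the Methods could be sketched as follows. This is a minimal illustration only, assuming `scipy.stats`; the contingency counts and expert scores below are invented placeholders, not data from the study.

```python
# Hypothetical sketch of the abstract's statistical comparisons (scipy.stats).
# All counts and scores are placeholder values, not the study's data.
from scipy.stats import chi2_contingency, fisher_exact, mannwhitneyu

# Objective questions: correct/incorrect counts per model (placeholder data).
table = [[28, 4],   # GPT-3.5: correct, incorrect
         [30, 2]]   # GPT-4:   correct, incorrect

# Common rule of thumb: use Fisher's exact test when cell counts are small,
# otherwise the chi-square test of independence.
if min(min(row) for row in table) < 5:
    stat, p = fisher_exact(table)
else:
    stat, p, dof, expected = chi2_contingency(table)
print(f"objective-question accuracy comparison: p = {p:.3f}")

# Subjective questions: expert ratings per model (placeholder data);
# Mann-Whitney U is the nonparametric alternative to the t-test.
gpt35_scores = [4, 5, 4, 5]
gpt4_scores = [5, 5, 4, 5]
u_stat, p_subj = mannwhitneyu(gpt35_scores, gpt4_scores)
print(f"subjective-question comparison (Mann-Whitney U): p = {p_subj:.3f}")
```

The branch between Fisher's exact test and chi-square mirrors the "chi-square test or Fisher exact probability method" wording in the abstract; the actual decision rule the authors used is not stated in this record.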
Record ID: doaj-art-ecf2abb58ff24e1fbd0829f98af36db5
Institution: Kabale University
ISSN: 2055-2076