Artificial intelligence's contribution to biomedical literature search: revolutionizing or complicating?

A growing number of articles describe the use of conversational AI (e.g., ChatGPT) for generating scientific literature reviews and summaries, yet comparative evidence lags behind its wide adoption by clinicians and researchers. We explored ChatGPT's utility for literature search from an end-user perspective, through the lens of clinicians and biomedical researchers. We quantitatively compared the utility of basic versions of ChatGPT against conventional search methods such as Google and PubMed. We further tested whether ChatGPT user-support tools (plugins, the web-browsing function, prompt engineering, and custom GPTs) could improve its responses across four common and practical literature search scenarios: (1) high-interest topics with an abundance of information, (2) niche topics with limited information, (3) scientific hypothesis generation, and (4) questions about newly emerging clinical practices. Our results showed that basic ChatGPT functions had limitations in consistency, accuracy, and relevancy. User-support tools yielded improvements, but the limitations persisted. Interestingly, each literature search scenario posed different challenges: an abundance of secondary information sources for high-interest topics, and unconvincing literature for new or niche topics. This study tested practical examples that highlight both the potential and the pitfalls of integrating conversational AI into literature search processes, and it underscores the necessity for rigorous comparative assessments of AI tools in scientific research.

Bibliographic Details
Main Authors: Rui Yip, Young Joo Sun, Alexander G Bassuk, Vinit B Mahajan
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2025-05-01
Series: PLOS Digital Health
ISSN: 2767-3170
Online Access: https://doi.org/10.1371/journal.pdig.0000849