AI in Qualitative Health Research Appraisal: Comparative Study

Abstract: Background: Qualitative research appraisal is crucial for ensuring credible findings but faces challenges due to human variability. Artificial intelligence (AI) models have the potential to enhance the efficiency and consistency of qualitative research assessments. …

Full description

Saved in:
Bibliographic Details
Main Author: August Landerholm
Format: Article
Language:English
Published: JMIR Publications 2025-07-01
Series:JMIR Formative Research
Online Access:https://formative.jmir.org/2025/1/e72815
collection DOAJ
description Abstract:
Background: Qualitative research appraisal is crucial for ensuring credible findings but faces challenges due to human variability. Artificial intelligence (AI) models have the potential to enhance the efficiency and consistency of qualitative research assessments.
Objective: This study aims to evaluate the performance of 5 AI models (GPT-3.5, Claude 3.5, Sonar Huge, GPT-4, and Claude 3 Opus) in assessing the quality of qualitative research using 3 standardized tools: the Critical Appraisal Skills Programme (CASP) checklist, the Joanna Briggs Institute (JBI) checklist, and the Evaluative Tools for Qualitative Studies (ETQS).
Methods: AI-generated assessments of 3 peer-reviewed qualitative papers in health and physical activity–related research were analyzed. The study examined systematic affirmation bias, interrater reliability, and tool-dependent disagreements across the AI models. A sensitivity analysis evaluated the impact of excluding specific models on agreement levels.
Results: All AI models showed a systematic affirmation bias, with "Yes" rates ranging from 75.9% (145/191; Claude 3 Opus) to 85.4% (164/192; Claude 3.5). GPT-4 diverged significantly, showing lower agreement ("Yes": 115/192, 59.9%) and higher uncertainty ("Cannot tell": 69/192, 35.9%). Proprietary models (GPT-3.5 and Claude 3.5) demonstrated near-perfect alignment (Cramér V, P …).
Conclusions: The findings demonstrate that AI models exhibit both promise and limitations as evaluators of qualitative research quality. While they enhance efficiency, AI models struggle to reach consensus in areas requiring nuanced interpretation, particularly for contextual criteria. The study underscores the importance of hybrid frameworks that integrate AI scalability with human oversight, especially for contextual judgment.
Future research should prioritize developing AI training protocols that emphasize qualitative epistemology, benchmarking AI performance against expert panels to validate accuracy thresholds, and establishing ethical guidelines for disclosing AI's role in systematic reviews. As qualitative methodologies evolve alongside AI capabilities, the path forward lies in collaborative human-AI workflows that leverage AI's efficiency while preserving human expertise for interpretive tasks.
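The agreement statistics the abstract reports — per-model "Yes" rates over checklist items and Cramér's V for alignment between model pairs — can be reproduced from raw categorical ratings. A minimal pure-Python sketch follows; the rating lists are hypothetical stand-ins, since the study's raw data are not part of this record:

```python
from collections import Counter
from math import sqrt

def yes_rate(ratings):
    """Fraction of 'Yes' answers among all rated checklist items."""
    return ratings.count("Yes") / len(ratings)

def cramers_v(a, b):
    """Cramér's V between two raters' paired categorical ratings.

    Builds the contingency table, computes the chi-square statistic,
    and normalizes it to [0, 1]; 1.0 means perfect association.
    """
    n = len(a)
    obs = Counter(zip(a, b))          # observed cell counts
    row, col = Counter(a), Counter(b) # marginal totals
    chi2 = 0.0
    for ra in set(a):
        for cb in set(b):
            expected = row[ra] * col[cb] / n
            chi2 += (obs[(ra, cb)] - expected) ** 2 / expected
    k = min(len(set(a)), len(set(b))) # smaller table dimension
    return sqrt(chi2 / (n * (k - 1)))
```

For example, two models that answer identically on every item yield `cramers_v(...) == 1.0`, matching the "near-perfect alignment" reported for GPT-3.5 and Claude 3.5, while statistically independent answer patterns yield 0.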
format Article
id doaj-art-c1fe6ef8f1014709858b6fa1c448656c
institution Kabale University
issn 2561-326X
language English
publishDate 2025-07-01
publisher JMIR Publications
record_format Article
series JMIR Formative Research
url https://formative.jmir.org/2025/1/e72815