Evaluating generative AI for qualitative data extraction in community-based fisheries management literature
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | BMC, 2025-06-01 |
| Series: | Environmental Evidence |
| Subjects: | |
| Online Access: | https://doi.org/10.1186/s13750-025-00362-9 |
| Summary: | Uptake of AI tools in knowledge production processes is rapidly growing. In this pilot study, we explore the ability of generative AI tools to reliably extract qualitative data from a limited sample of peer-reviewed documents in the context of community-based fisheries management (CBFM) literature. Specifically, we evaluate the capacity of multiple AI tools to analyse 33 CBFM papers and extract relevant information for a systematic literature review, comparing the results to those of human reviewers. We address how well AI tools can discern the presence of relevant contextual data, whether their outputs are comparable to human extractions, and whether question difficulty influences extraction performance. While the AI tools we tested (GPT-4 Turbo and Elicit) were not reliable in discerning the presence or absence of contextual data, at least one of the tools consistently returned responses on par with those of human reviewers. These results highlight the potential utility of AI tools in the extraction phase of evidence synthesis for supporting human-led reviews, while underscoring the ongoing need for human oversight. This exploratory investigation provides initial insights into the current capabilities and limitations of AI in qualitative data extraction within the specific domain of CBFM, laying groundwork for future, more comprehensive evaluations across diverse fields and larger datasets. |
| ISSN: | 2047-2382 |