Exploring large language models for summarizing and interpreting an online brain tumor support forum
| Main Authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | SAGE Publishing, 2025-04-01 |
| Series: | Digital Health |
| Online Access: | https://doi.org/10.1177/20552076251337345 |
| Summary: | Objective: This study explored the capabilities of the large language models (LLMs) GPT-3.5, GPT-4, and Llama 3 for summarizing qualitative data from an online brain tumor support forum, assessing the differences between these methods and traditional thematic analysis. Methods: Eight posts and their responses were collected in September 2024 from the American Brain Tumor Association Brain Tumor Support Group using a passive/unobtrusive method. The data were analyzed in two ways: (1) traditional thematic coding with Dedoose software and (2) summarization and interpretation using LLMs. Prompts guided the LLMs in generating summaries and identifying key challenges, and the results were evaluated with the metrics BLEU, ROUGE-1, ROUGE-2, ROUGE-L, METEOR, and BERTScore (F1). Flesch-Kincaid grade level and reading ease scores were also calculated and compared. Results: GPT-4 demonstrated superior performance on the ROUGE and METEOR metrics, outperforming GPT-3.5 and Llama 3. Semantic similarity scores were comparable across models. GPT-4's capacity to process entire transcripts increased efficiency, whereas GPT-3.5 and Llama 3 required the data to be segmented. Summaries produced by the LLMs aligned closely with the human-generated thematic analysis, with substantial reductions in time and labor. Conclusion: LLMs, particularly GPT-4, show strong potential for summarizing complex qualitative health data, offering time-efficient and consistent outputs. These tools may enhance research efficiency and provide support in patient-centered environments. However, challenges such as training-data biases and capacity limitations in some models warrant further investigation. |
| ISSN: | 2055-2076 |
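
The abstract names several automatic evaluation metrics (BLEU, ROUGE-1/2/L, METEOR, BERTScore F1) and readability scores (Flesch-Kincaid) but does not specify an implementation. The sketch below shows one common way such scores could be computed in Python; the example reference/candidate texts and the package choices (nltk, rouge-score, bert-score, textstat) are illustrative assumptions, not details taken from the article.

```python
"""Illustrative sketch (not the authors' code): scoring an LLM-generated summary
against a human-written reference using the metrics named in the abstract.
Assumes the packages nltk, rouge-score, bert-score, and textstat are installed."""

import nltk
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score
from rouge_score import rouge_scorer
from bert_score import score as bert_score
import textstat

# METEOR relies on WordNet; fetch the required NLTK data once.
nltk.download("wordnet", quiet=True)
nltk.download("omw-1.4", quiet=True)

# Hypothetical example texts standing in for a human summary and an LLM summary.
reference = "Caregivers described exhaustion and uncertainty about treatment options."
candidate = "Posters reported caregiver fatigue and confusion over treatment choices."

ref_tokens = reference.lower().split()
cand_tokens = candidate.lower().split()

# BLEU (sentence-level, smoothed because summaries are short).
bleu = sentence_bleu([ref_tokens], cand_tokens,
                     smoothing_function=SmoothingFunction().method1)

# ROUGE-1, ROUGE-2, ROUGE-L F-measures.
rouge = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
rouge_f = {name: s.fmeasure for name, s in rouge.score(reference, candidate).items()}

# METEOR (expects pre-tokenized reference(s) and hypothesis).
meteor = meteor_score([ref_tokens], cand_tokens)

# BERTScore F1: semantic similarity from contextual embeddings.
_, _, f1 = bert_score([candidate], [reference], lang="en", verbose=False)

# Readability of the candidate summary.
fk_grade = textstat.flesch_kincaid_grade(candidate)
fk_ease = textstat.flesch_reading_ease(candidate)

print(f"BLEU={bleu:.3f}  ROUGE={rouge_f}  METEOR={meteor:.3f}")
print(f"BERTScore-F1={f1.mean().item():.3f}  FK grade={fk_grade:.1f}  FK ease={fk_ease:.1f}")
```

In practice each LLM summary would be scored against the corresponding human-generated thematic summary, and the per-model scores averaged for comparison; how the study aggregated scores across the eight posts is not described in the record above.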