Comparing large language models for supervised analysis of students’ lab notes
Recent advancements in large language models (LLMs) hold significant promise for improving physics education research that uses machine learning. In this study, we compare the application of various models for conducting a large-scale analysis of written text grounded in a physics education research...
| Main Authors: | Rebeckah K. Fussell, Megan Flynn, Anil Damle, Michael F. J. Fox, N. G. Holmes |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | American Physical Society, 2025-03-01 |
| Series: | Physical Review Physics Education Research |
| Online Access: | http://doi.org/10.1103/PhysRevPhysEducRes.21.010128 |
Similar Items
- Improving Large Language Models’ Summarization Accuracy by Adding Highlights to Discharge Notes: Comparative Evaluation
  by: Mahshad Koohi Habibi Dehkordi, et al.
  Published: (2025-07-01)
- Structuring groups for gender equitable equipment usage in labs
  by: Matthew Dew, et al.
  Published: (2025-06-01)
- Supervised Natural Language Processing Classification of Violent Death Narratives: Development and Assessment of a Compact Large Language Model
  by: Susan T Parker
  Published: (2025-06-01)
- Library Video Tutorials to Support Large Undergraduate Labs: Will They Watch?
  by: April L. Colosimo, et al.
  Published: (2012-03-01)
- Note on Melitaea Phaeton
  by: Holmes Hinkley
  Published: (1888-01-01)