Comparing the performance of a large language model and naive human interviewers in interviewing children about a witnessed mock-event.
<h4>Purpose</h4>The present study compared the performance of a Large Language Model (LLM; ChatGPT) and human interviewers in interviewing children about a mock-event they witnessed.<h4>Methods</h4>Children aged 6-8 (N = 78) were randomly assigned to the LLM (n = 40) or the...
| Main Authors: | Yongjie Sun, Haohai Pang, Liisa Järvilehto, Ophelia Zhang, David Shapiro, Julia Korkman, Shumpei Haginoya, Pekka Santtila |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Public Library of Science (PLoS), 2025-01-01 |
| Series: | PLoS ONE |
| Online Access: | https://doi.org/10.1371/journal.pone.0316317 |
Similar Items
- Scalable training for child sexual abuse interviews in Japan: Using AI-driven avatars to test multiple behavioral modeling interventions
  by: Shumpei Haginoya, et al.
  Published: (2025-07-01)
- Large language models' knowledge of children's memory and suggestibility: Evaluating model predictions of prior experimental results
  by: Pekka Santtila, et al.
  Published: (2025-08-01)
- Gaze aversion in conversational settings: An investigation based on mock job interview
  by: Cengiz Acarturk, et al.
  Published: (2021-05-01)
- Emotional engagement and perceived empathy in live vs. automated psychological interviews
  by: Thomas J Nyman, et al.
  Published: (2025-01-01)
- From Engineering Student to Engineering Professional: Analyzing Discursive Engineering Identity Enacted in Mock Job Interviews
  by: Andrew Olewnik, et al.
  Published: (2025-04-01)