Language models outperform cloze predictability in a cognitive model of reading.
Although word predictability is commonly considered an important factor in reading, sophisticated accounts of predictability in theories of reading are lacking. Computational models of reading traditionally use cloze norming as a proxy for word predictability, but what cloze norms precisely capture r...
| Main Authors: | Adrielli Tina Lopes Rego, Joshua Snell, Martijn Meeter |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Public Library of Science (PLoS), 2024-09-01 |
| Series: | PLoS Computational Biology |
| Online Access: | https://doi.org/10.1371/journal.pcbi.1012117 |
Similar Items
- The Use of the Cloze Test in Reading Comprehension Assessment in Brazil: Post-Pandemic Challenges
  by: Flávia Oliveira Freitas, et al.
  Published: (2025-05-01)
- Cloze Testing as an Alternative to the Conventional Exam in E.B.E.
  by: Honesto Herrera Soler
  Published: (1993-12-01)
- Large Language Models Outperform Traditional Natural Language Processing Methods in Extracting Patient-Reported Outcomes in Inflammatory Bowel Disease
  by: Perseus V. Patel, et al.
  Published: (2025-01-01)
- Do domain-specific protein language models outperform general models on immunology-related tasks?
  by: Nicolas Deutschmann, et al.
  Published: (2024-06-01)
- Regularized regression outperforms trees for predicting cognitive function in the Health and Retirement Study
  by: Kyle Masato Ishikawa, et al.
  Published: (2025-09-01)