Modeling rapid language learning by distilling Bayesian priors into artificial neural networks

Abstract: Humans can learn languages from remarkably little experience. Developing computational models that explain this ability has been a major challenge in cognitive science. Existing approaches have been successful at explaining how humans generalize rapidly in controlled settings but are usuall...

Bibliographic Details
Main Authors: R. Thomas McCoy, Thomas L. Griffiths
Format: Article
Language: English
Published: Nature Portfolio 2025-05-01
Series: Nature Communications
Online Access:https://doi.org/10.1038/s41467-025-59957-y