Incremental accumulation of linguistic context in artificial and biological neural networks
Abstract: Large Language Models (LLMs) have shown success in predicting neural signals associated with narrative processing, but their approach to integrating context over large timescales differs fundamentally from that of the human brain. In this study, we show how the brain, unlike LLMs that proce...
| Field | Value |
|---|---|
| Main Authors | Refael Tikochinski, Ariel Goldstein, Yoav Meiri, Uri Hasson, Roi Reichart |
| Format | Article |
| Language | English |
| Published | Nature Portfolio, 2025-01-01 |
| Series | Nature Communications |
| Online Access | https://doi.org/10.1038/s41467-025-56162-9 |
Similar Items
- Estimation of Maximum Daily Fresh Snow Accumulation Using an Artificial Neural Network Model
  by: Gun Lee, et al. Published: (2019-01-01)
- Compensating Sparse-view Inline Computed Tomography Artifacts with Neural Representation and Incremental Forward-Backward Network Architecture
  by: Manuel Buchfink, et al. Published: (2025-02-01)
- Design of an Incremental Music Teaching and Assisted Therapy System Based on Artificial Intelligence Attention Mechanism
  by: Dapeng Li, et al. Published: (2022-01-01)
- Neural Linguistic Steganalysis via Multi-Head Self-Attention
  by: Sai-Mei Jiao, et al. Published: (2021-01-01)
- ECG Prediction Based on Classification via Neural Networks and Linguistic Fuzzy Logic Forecaster
  by: Eva Volna, et al. Published: (2015-01-01)