Incremental accumulation of linguistic context in artificial and biological neural networks
Abstract: Large Language Models (LLMs) have shown success in predicting neural signals associated with narrative processing, but their approach to integrating context over large timescales differs fundamentally from that of the human brain. In this study, we show how the brain, unlike LLMs that process large text windows in parallel, integrates short-term and long-term contextual information through an incremental mechanism. Using fMRI data from 219 participants listening to spoken narratives, we first demonstrate that LLMs predict brain activity effectively only when using short contextual windows of up to a few dozen words. Next, we introduce an alternative LLM-based incremental-context model that combines incoming short-term context with an aggregated, dynamically updated summary of prior context. This model significantly enhances the prediction of neural activity in higher-order regions involved in long-timescale processing. Our findings reveal how the brain’s hierarchical temporal processing mechanisms enable the flexible integration of information over time, providing valuable insights for both cognitive neuroscience and AI development.
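The incremental-context model described in the abstract lends itself to a simple loop: process the narrative in short windows, and carry long-range information forward only through a running summary that is updated after each window. Below is a minimal illustrative sketch of that idea in Python, not the authors' code: the `summarize` and `embed` helpers are hypothetical stand-ins for whatever LLM calls a real pipeline would make, and the toy lambdas exist only so the example runs.

```python
from typing import Callable, List

def incremental_context_embeddings(
    words: List[str],
    window: int,                            # short-term context size (e.g., a few dozen words)
    summarize: Callable[[str, str], str],   # (old_summary, new_chunk) -> updated summary
    embed: Callable[[str], List[float]],    # text -> embedding vector
) -> List[List[float]]:
    """Embed each short window prefixed by a dynamically updated summary of prior context."""
    summary = ""
    embeddings: List[List[float]] = []
    for start in range(0, len(words), window):
        chunk = " ".join(words[start:start + window])
        # Long-term context enters only via the aggregated summary;
        # the full transcript is never re-read in parallel.
        embeddings.append(embed((summary + " " + chunk).strip()))
        # Fold the new chunk into the running summary for the next step.
        summary = summarize(summary, chunk)
    return embeddings

# Toy stand-ins so the sketch runs end to end; a real pipeline would use
# an LLM for both summarization and embedding extraction.
emb = incremental_context_embeddings(
    "once upon a time a brain listened to a long spoken story".split(),
    window=3,
    summarize=lambda old, new: (old + " " + new)[-200:],  # naive truncating "summary"
    embed=lambda text: [float(len(text.split()))],        # placeholder 1-D embedding
)
```

In an encoding analysis like the one the abstract describes, such embeddings would then be regressed (e.g., with ridge regression) against the fMRI time series to predict voxel activity; the question tested is whether adding the aggregated summary improves predictions in higher-order, long-timescale regions.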
Main Authors: Refael Tikochinski, Ariel Goldstein, Yoav Meiri, Uri Hasson, Roi Reichart
Format: Article
Language: English
Published: Nature Portfolio, 2025-01-01
Series: Nature Communications
Online Access: https://doi.org/10.1038/s41467-025-56162-9
author | Refael Tikochinski, Ariel Goldstein, Yoav Meiri, Uri Hasson, Roi Reichart
collection | DOAJ |
id | doaj-art-10fd7ae1ad4a48dc96cbf8faf6598409 |
institution | Kabale University |
issn | 2041-1723 |
affiliations | Refael Tikochinski: The Faculty of Data and Decisions Sciences, Technion - Israel Institute of Technology; Ariel Goldstein: Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem; Yoav Meiri: The Faculty of Data and Decisions Sciences, Technion - Israel Institute of Technology; Uri Hasson: Department of Psychology, Princeton University; Roi Reichart: The Faculty of Data and Decisions Sciences, Technion - Israel Institute of Technology
title | Incremental accumulation of linguistic context in artificial and biological neural networks |