Network structure influences the strength of learned neural representations
Abstract From sequences of discrete events, humans build mental models of their world. Referred to as graph learning, the process produces a model encoding the graph of event-to-event transition probabilities. Recent evidence suggests that some networks are easier to learn than others, but the neural underpinnings of this effect remain unknown...
Main Authors: | Ari E. Kahn, Karol Szymula, Sophie Loman, Edda B. Haggerty, Nathaniel Nyema, Geoffrey K. Aguirre, Dani S. Bassett |
---|---|
Format: | Article |
Language: | English |
Published: | Nature Portfolio, 2025-01-01 |
Series: | Nature Communications |
Online Access: | https://doi.org/10.1038/s41467-024-55459-5 |
_version_ | 1832585541859672064 |
---|---|
author | Ari E. Kahn; Karol Szymula; Sophie Loman; Edda B. Haggerty; Nathaniel Nyema; Geoffrey K. Aguirre; Dani S. Bassett |
author_facet | Ari E. Kahn; Karol Szymula; Sophie Loman; Edda B. Haggerty; Nathaniel Nyema; Geoffrey K. Aguirre; Dani S. Bassett |
author_sort | Ari E. Kahn |
collection | DOAJ |
description | Abstract From sequences of discrete events, humans build mental models of their world. Referred to as graph learning, the process produces a model encoding the graph of event-to-event transition probabilities. Recent evidence suggests that some networks are easier to learn than others, but the neural underpinnings of this effect remain unknown. Here we use fMRI to show that even over short timescales the network structure of a temporal sequence of stimuli determines the fidelity of event representations as well as the dimensionality of the space in which those representations are encoded: when the graph was modular as opposed to lattice-like, BOLD representations in visual areas better predicted trial identity and displayed higher intrinsic dimensionality. Broadly, our study shows that network context influences the strength of learned neural representations, motivating future work in the design, optimization, and adaptation of network contexts for distinct types of learning. |
format | Article |
id | doaj-art-029c106007c646ffa7e826997e9d1f70 |
institution | Kabale University |
issn | 2041-1723 |
language | English |
publishDate | 2025-01-01 |
publisher | Nature Portfolio |
record_format | Article |
series | Nature Communications |
spelling | doaj-art-029c106007c646ffa7e826997e9d1f70; 2025-01-26T12:40:33Z; eng; Nature Portfolio; Nature Communications; 2041-1723; 2025-01-01; 16; 1; 1; 19; 10.1038/s41467-024-55459-5; Network structure influences the strength of learned neural representations; Ari E. Kahn (Princeton Neuroscience Institute and Department of Psychology, Princeton University); Karol Szymula (Medical Scientist Training Program, University of Rochester School of Medicine and Dentistry); Sophie Loman (Department of Bioengineering, School of Engineering & Applied Science, University of Pennsylvania); Edda B. Haggerty (Department of Neurology, Perelman School of Medicine, University of Pennsylvania); Nathaniel Nyema (Department of Bioengineering, School of Engineering & Applied Science, University of Pennsylvania); Geoffrey K. Aguirre (Department of Neurology, Perelman School of Medicine, University of Pennsylvania); Dani S. Bassett (Department of Bioengineering, School of Engineering & Applied Science, University of Pennsylvania); Abstract From sequences of discrete events, humans build mental models of their world. Referred to as graph learning, the process produces a model encoding the graph of event-to-event transition probabilities. Recent evidence suggests that some networks are easier to learn than others, but the neural underpinnings of this effect remain unknown. Here we use fMRI to show that even over short timescales the network structure of a temporal sequence of stimuli determines the fidelity of event representations as well as the dimensionality of the space in which those representations are encoded: when the graph was modular as opposed to lattice-like, BOLD representations in visual areas better predicted trial identity and displayed higher intrinsic dimensionality. Broadly, our study shows that network context influences the strength of learned neural representations, motivating future work in the design, optimization, and adaptation of network contexts for distinct types of learning.; https://doi.org/10.1038/s41467-024-55459-5 |
spellingShingle | Ari E. Kahn; Karol Szymula; Sophie Loman; Edda B. Haggerty; Nathaniel Nyema; Geoffrey K. Aguirre; Dani S. Bassett; Network structure influences the strength of learned neural representations; Nature Communications |
title | Network structure influences the strength of learned neural representations |
title_full | Network structure influences the strength of learned neural representations |
title_fullStr | Network structure influences the strength of learned neural representations |
title_full_unstemmed | Network structure influences the strength of learned neural representations |
title_short | Network structure influences the strength of learned neural representations |
title_sort | network structure influences the strength of learned neural representations |
url | https://doi.org/10.1038/s41467-024-55459-5 |
work_keys_str_mv | AT ariekahn networkstructureinfluencesthestrengthoflearnedneuralrepresentations AT karolszymula networkstructureinfluencesthestrengthoflearnedneuralrepresentations AT sophieloman networkstructureinfluencesthestrengthoflearnedneuralrepresentations AT eddabhaggerty networkstructureinfluencesthestrengthoflearnedneuralrepresentations AT nathanielnyema networkstructureinfluencesthestrengthoflearnedneuralrepresentations AT geoffreykaguirre networkstructureinfluencesthestrengthoflearnedneuralrepresentations AT danisbassett networkstructureinfluencesthestrengthoflearnedneuralrepresentations |
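
The abstract above contrasts learning over a modular transition graph with learning over a lattice-like one. This record does not specify how those graphs were constructed, so the sketch below is only a rough illustration: the node count (15), the three-community layout, the ring-lattice degree, and the walk length are assumptions made for demonstration, not the authors' design. It builds the two graph types and samples the kind of random-walk stimulus sequence a graph-learning experiment presents.

```python
import numpy as np

def ring_lattice(n=15, k=4):
    """Adjacency matrix of a ring lattice: each node links to its k nearest neighbors."""
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for offset in range(1, k // 2 + 1):
            A[i, (i + offset) % n] = A[(i + offset) % n, i] = 1
    return A

def modular_graph(n_communities=3, community_size=5):
    """Adjacency matrix of a modular graph: dense communities joined by sparse connector edges."""
    n = n_communities * community_size
    A = np.zeros((n, n), dtype=int)
    for c in range(n_communities):
        members = np.arange(c * community_size, (c + 1) * community_size)
        A[np.ix_(members, members)] = 1          # fully connect each community
    np.fill_diagonal(A, 0)
    for c in range(n_communities):               # add one connector edge to the next community
        a = c * community_size
        b = ((c + 1) % n_communities) * community_size + community_size - 1
        A[a, b] = A[b, a] = 1
    return A

def random_walk(A, length=500, seed=None):
    """Sample a stimulus sequence as a random walk on the transition graph."""
    rng = np.random.default_rng(seed)
    seq = [int(rng.integers(A.shape[0]))]
    for _ in range(length - 1):
        neighbors = np.flatnonzero(A[seq[-1]])   # candidate next stimuli
        seq.append(int(rng.choice(neighbors)))
    return np.array(seq)

modular, lattice = modular_graph(), ring_lattice()
print("modular degrees:", modular.sum(axis=1))
print("lattice degrees:", lattice.sum(axis=1))
print("example sequence:", random_walk(modular, length=12, seed=0))
```

In the study's framing, sequences drawn from walks like these differ only in the higher-order structure of the transition graph; the claim tested with fMRI is that this structure alone changes how faithfully, and in how many dimensions, the evoked stimuli are represented.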