Compression-enabled interpretability of voxelwise encoding models.


Bibliographic Details
Main Authors: Fatemeh Kamali, Amir Abolfazl Suratgar, Mohammadbagher Menhaj, Reza Abbasi-Asl
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2025-02-01
Series: PLoS Computational Biology
Online Access: https://doi.org/10.1371/journal.pcbi.1012822
Description
Summary: Voxelwise encoding models based on convolutional neural networks (CNNs) are widely used as predictive models of brain activity evoked by natural movies. Despite their superior predictive performance, the huge number of parameters in CNN-based models has made them difficult to interpret. Here, we investigate whether model compression can build more interpretable and more stable CNN-based voxelwise models while maintaining accuracy. We used multiple compression techniques: pruning of less important CNN filters and connections, a receptive field compression method that selects receptive fields with optimal center and size, and principal component analysis to reduce dimensionality. We demonstrate that model compression improves the accuracy of identifying visual stimuli in a hold-out test set. Additionally, compressed models offer a more stable interpretation of voxelwise pattern selectivity than uncompressed models. Finally, the receptive field-compressed models reveal that the optimal model-based population receptive fields become larger and more centralized along the ventral visual pathway. Overall, our findings support using model compression to build more interpretable voxelwise models.
ISSN: 1553-734X (print), 1553-7358 (online)
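
The summary above names three compression steps: pruning less important CNN filters and connections, compressing receptive fields, and reducing dimensionality with principal component analysis. The Python sketch below illustrates two of these steps under stated assumptions: it uses L1-norm magnitude as the filter-importance criterion and an SVD-based PCA. Neither choice is specified in this record, and all function names and parameters here are illustrative, not the authors' implementation.

import numpy as np

# Illustrative sketch only: magnitude-based filter pruning followed by
# PCA dimensionality reduction. The paper's actual pruning criterion
# and pipeline are not given in this record.

def prune_filters(conv_weights, keep_fraction=0.5):
    """Keep the fraction of convolutional filters with the largest L1 norm.

    conv_weights: array of shape (n_filters, in_channels, k, k).
    Returns the retained filters, in their original order.
    """
    norms = np.abs(conv_weights).reshape(conv_weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(keep_fraction * conv_weights.shape[0])))
    keep_idx = np.sort(np.argsort(norms)[-n_keep:])
    return conv_weights[keep_idx]

def pca_reduce(features, n_components):
    """Project feature rows onto their top principal components.

    features: array of shape (n_samples, n_features).
    """
    centered = features - features.mean(axis=0, keepdims=True)
    # SVD of the centered data; rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(64, 3, 7, 7))   # 64 filters, 3 channels, 7x7
    features = rng.normal(size=(200, 1024))    # 200 stimuli x 1024 features

    pruned = prune_filters(weights, keep_fraction=0.25)
    reduced = pca_reduce(features, n_components=20)
    print(pruned.shape, reduced.shape)         # (16, 3, 7, 7) (200, 20)

Magnitude-based pruning is a common stand-in for filter-importance measures in the compression literature; the paper's actual importance criterion and its receptive field compression step may differ.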