Time-frequency feature calculation of multi-stage audiovisual neural processing via electroencephalogram microstates
Introduction: Audiovisual (AV) perception is a fundamental modality for environmental cognition and social communication, involving complex, non-linear multisensory processing of large-scale neuronal activity modulated by attention. However, precise characterization of the underlying AV processing dynamics remains elusive.
| Main Authors: | Yang Xi, Lu Zhang, Cunzhen Li, Xiaopeng Lv, Zhu Lan |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2025-08-01 |
| Series: | Frontiers in Neuroscience |
| Subjects: | audiovisual processing; electroencephalography; microstates; time-frequency features; attentional mechanism |
| Online Access: | https://www.frontiersin.org/articles/10.3389/fnins.2025.1643554/full |
| Field | Value |
|---|---|
| author | Yang Xi; Lu Zhang; Cunzhen Li; Xiaopeng Lv; Zhu Lan |
| collection | DOAJ |
| description | Introduction: Audiovisual (AV) perception is a fundamental modality for environmental cognition and social communication, involving complex, non-linear multisensory processing of large-scale neuronal activity modulated by attention. However, precise characterization of the underlying AV processing dynamics remains elusive. Methods: We designed an AV semantic discrimination task to acquire electroencephalogram (EEG) data under attended and unattended conditions. To temporally resolve the neural processing stages, we developed an EEG microstate-based analysis method. This involved segmenting the EEG into functional sub-stages by applying hierarchical clustering to global field power-peak topographic maps. The optimal number of microstate classes was determined using the Krzanowski-Lai criterion and Global Explained Variance evaluation. We analyzed filtered EEG data across frequency bands to quantify microstate attributes (e.g., duration, occurrence, coverage, transition probabilities), deriving comprehensive time-frequency features. These features were then used to classify processing states with multiple machine learning models. Results: Distinct, temporally continuous microstate sequences were identified characterizing attended versus unattended AV processing. The analysis of microstate attributes yielded time-frequency features that achieved high classification accuracy: 97.8% for distinguishing attended vs. unattended states and 98.6% for discriminating unimodal (auditory or visual) versus multimodal (AV) processing across the employed machine learning models. Discussion: Our EEG microstate-based method effectively characterizes the spatio-temporal dynamics of AV processing. Furthermore, it provides neurophysiologically interpretable explanations for the highly accurate classification outcomes, offering significant insights into the neural mechanisms underlying attended and unattended multisensory integration. |
| format | Article |
| id | doaj-art-d2688a9f011d428a964bf4fada151fc9 |
| institution | DOAJ |
| issn | 1662-453X |
| language | English |
| publishDate | 2025-08-01 |
| publisher | Frontiers Media S.A. |
| record_format | Article |
| series | Frontiers in Neuroscience |
| doi | 10.3389/fnins.2025.1643554 |
| volume | 19 |
| affiliations | Yang Xi, Lu Zhang, Cunzhen Li, Zhu Lan: School of Computer Science, Northeast Electric Power University, Jilin, China; Xiaopeng Lv: Department of Chemoradiotherapy, Jilin City Hospital of Chemical Industry, Jilin, China |
| title | Time-frequency feature calculation of multi-stage audiovisual neural processing via electroencephalogram microstates |
| topic | audiovisual processing; electroencephalography; microstates; time-frequency features; attentional mechanism |
| url | https://www.frontiersin.org/articles/10.3389/fnins.2025.1643554/full |
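The segmentation step summarized in the abstract starts from global field power (GFP) peaks and clusters the peak topographies hierarchically into microstate classes. The sketch below illustrates that first step on simulated data; the channel count, sampling length, number of classes, and the average-linkage choice are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: GFP-peak extraction and hierarchical clustering of
# peak topographies into microstate class templates (simulated data).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_ch, n_t = 32, 2000                       # channels x samples (assumed)
eeg = rng.standard_normal((n_ch, n_t))

# Global field power: spatial standard deviation at each time sample.
gfp = eeg.std(axis=0)

# Local GFP maxima mark moments of quasi-stable topography.
peaks = np.flatnonzero((gfp[1:-1] > gfp[:-2]) & (gfp[1:-1] > gfp[2:])) + 1
maps = eeg[:, peaks].T                     # one topography per GFP peak

# Normalize each topography so clustering compares shape, not amplitude.
maps /= np.linalg.norm(maps, axis=1, keepdims=True)

# Hierarchical (average-linkage) clustering into k microstate classes.
k = 4
labels = fcluster(linkage(maps, method="average"), t=k, criterion="maxclust")

# Class templates: mean topography of each cluster, renormalized.
templates = np.stack([maps[labels == c].mean(axis=0)
                      for c in np.unique(labels)])
templates /= np.linalg.norm(templates, axis=1, keepdims=True)
```

In a real pipeline the templates would then be fit back to the continuous (band-filtered) EEG to produce the per-sample microstate label sequence.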
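The abstract states that the number of microstate classes was chosen with the Krzanowski-Lai (KL) criterion alongside Global Explained Variance. A rough sketch of the standard KL computation on simulated peak topographies, assuming within-cluster sum of squares as the dispersion measure (the paper's exact dispersion definition and candidate range are not specified here):

```python
# Krzanowski-Lai criterion over candidate cluster counts (simulated data).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def within_dispersion(maps, labels):
    """Sum of squared distances of each map to its cluster centroid."""
    w = 0.0
    for c in np.unique(labels):
        cluster = maps[labels == c]
        w += ((cluster - cluster.mean(axis=0)) ** 2).sum()
    return w

rng = np.random.default_rng(1)
maps = rng.standard_normal((300, 32))      # simulated GFP-peak topographies
maps /= np.linalg.norm(maps, axis=1, keepdims=True)

Z = linkage(maps, method="average")
p = maps.shape[1]                          # dimensionality (electrodes)
W = {k: within_dispersion(maps, fcluster(Z, t=k, criterion="maxclust"))
     for k in range(1, 10)}

# KL: DIFF(k) = (k-1)^(2/p) W_{k-1} - k^(2/p) W_k,
#     KL(k) = |DIFF(k)| / |DIFF(k+1)|; pick the k maximizing KL(k).
def diff(k):
    return (k - 1) ** (2 / p) * W[k - 1] - k ** (2 / p) * W[k]

kl = {k: abs(diff(k)) / abs(diff(k + 1)) for k in range(2, 9)}
best_k = max(kl, key=kl.get)
```

GEV would complement this by measuring how much of the GFP-weighted variance the k templates explain, so the chosen k balances parsimony against explained variance.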
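The microstate attributes named in the abstract (duration, occurrence, coverage, transition probabilities) are standard quantities derived from a per-sample microstate label sequence. A self-contained sketch with a toy label sequence and an assumed sampling rate; computed per frequency band, these values would form the time-frequency feature vectors fed to the classifiers:

```python
# Microstate attributes from a per-sample label sequence (toy example).
import numpy as np

def microstate_attributes(labels, sfreq, n_states):
    """Mean duration (s), occurrence (segments/s), coverage (fraction of
    samples), and segment-to-segment transition probabilities."""
    labels = np.asarray(labels)
    # Segment boundaries: runs of identical consecutive labels.
    change = np.flatnonzero(np.diff(labels)) + 1
    starts = np.concatenate(([0], change))
    ends = np.concatenate((change, [len(labels)]))
    seg_lab = labels[starts]
    seg_len = (ends - starts) / sfreq          # segment lengths in seconds

    total_t = len(labels) / sfreq
    duration = np.array([seg_len[seg_lab == s].mean()
                         if np.any(seg_lab == s) else 0.0
                         for s in range(n_states)])
    occurrence = np.array([(seg_lab == s).sum()
                           for s in range(n_states)]) / total_t
    coverage = np.array([(labels == s).sum()
                         for s in range(n_states)]) / len(labels)

    # Transition probabilities between consecutive segments.
    trans = np.zeros((n_states, n_states))
    for a, b in zip(seg_lab[:-1], seg_lab[1:]):
        trans[a, b] += 1
    row = trans.sum(axis=1, keepdims=True)
    trans = np.divide(trans, row, out=np.zeros_like(trans), where=row > 0)
    return duration, occurrence, coverage, trans

labels = [0, 0, 0, 1, 1, 2, 2, 2, 2, 1, 1, 0]
dur, occ, cov, tp = microstate_attributes(labels, sfreq=4.0, n_states=3)
```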