Comparison of Linear and Nonlinear Methods for Decoding Selective Attention to Speech From Ear-EEG Recordings


Bibliographic Details
Main Authors: Mike D. Thornton, Danilo P. Mandic, Tobias Reichenbach
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects: Auditory attention decoding; EEG; hearables
Online Access: https://ieeexplore.ieee.org/document/11084763/
collection DOAJ
description Many people with hearing loss struggle to comprehend speech in crowded auditory scenes, even when they are using hearing aids. However, the focus of a listener’s selective attention to speech can be decoded from their electroencephalography (EEG) recordings, raising the prospect of smart EEG-steered hearing aids which restore speech comprehension in adverse acoustic environments. Here, we assess the feasibility of using a novel, ultra-wearable, ear-EEG device to classify the selective attention of normal-hearing listeners who participated in a two-talker competing-speakers experiment. State-of-the-art auditory attention decoding algorithms are compared, including stimulus-reconstruction algorithms based on linear regression as well as non-linear deep neural networks, and canonical correlation analysis (CCA). Meaningful markers of selective auditory attention could be extracted from the ear-EEG signals of all participants, even when those markers are derived from relatively short EEG segments of just 5 s in duration. Algorithms which relate the EEG signals to the rising edges of the speech temporal envelope are more successful than those which make use of the temporal envelope itself. The CCA algorithm achieves the highest mean attention decoding accuracy, although differences between the performances of the three algorithms are both small and not statistically significant when EEG segments of short durations are employed. In summary, our ultra-wearable ear-EEG device offers promising prospects for wearable auditory monitoring.
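The stimulus-reconstruction approach summarised in the description above can be sketched in a few lines: a ridge-regression "backward model" maps time-lagged EEG channels to the attended speech envelope, and attention is decoded by correlating the reconstruction with each candidate talker's envelope. This is a minimal illustrative sketch, not the paper's exact pipeline; all function names, lag counts, and regularisation values are assumptions.

```python
import numpy as np

def lag_matrix(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel (samples x channels*lags)."""
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, n_channels * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_channels:(lag + 1) * n_channels] = eeg[:n_samples - lag]
    return X

def train_backward_model(eeg, attended_env, n_lags=16, ridge=1e2):
    """Fit decoder weights w minimising ||Xw - env||^2 + ridge * ||w||^2."""
    X = lag_matrix(eeg, n_lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ attended_env)

def decode_attention(eeg, env_a, env_b, w, n_lags=16):
    """Return 0 if the reconstruction correlates more with talker A, else 1."""
    recon = lag_matrix(eeg, n_lags) @ w
    corr_a = np.corrcoef(recon, env_a)[0, 1]
    corr_b = np.corrcoef(recon, env_b)[0, 1]
    return 0 if corr_a >= corr_b else 1
```

The abstract notes that decoders driven by the rising edges (onsets) of the envelope outperformed those using the raw envelope; in a sketch like this, that would amount to replacing `attended_env` with its half-wave-rectified first difference before training.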
id doaj-art-59815eed44fb407597f87823ca78ebaa
issn 2169-3536
doi 10.1109/ACCESS.2025.3590490
volume 13
pages 127614-127625
affiliations Mike D. Thornton (ORCID 0000-0002-2235-5879): Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
Danilo P. Mandic (ORCID 0000-0001-8432-3963): Department of Electrical and Electronic Engineering, Imperial College London, London, U.K.
Tobias Reichenbach (ORCID 0000-0003-3367-3511): Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
topic Auditory attention decoding
EEG
hearables