Parallel Local and Global Context Modeling of Deep Learning-Based Monaural Speech Source Separation Techniques

Bibliographic Details
Main Authors: Swati Soni, Lalita Gupta, Rishav Dubey
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10969763/
Description
Summary: Novel deep learning-based time-domain single-channel speech source separation methods have shown remarkable progress. Recent studies achieve either global or local context modeling for monaural speaker separation: existing CNN-based methods perform local context modeling, while RNN-based or attention-based methods capture the global context of the speech signal. In this paper, we propose two models, a parallel CNN-RNN-based and a parallel CNN-attention-based separation module, that perform local and global context modeling in parallel. At each time step, our models keep the maximum of the local and global context values, which helps them separate the speaker signals more accurately. We conducted experiments on the Libri2mix and Libri3mix datasets, and the results demonstrate that the proposed models outperform state-of-the-art methods, with marked SDR and SI-SDR improvements on both datasets. On Libri2mix, the parallel CNN-RNN-based and CNN-attention-based separation models achieve average SDR improvements of 2.10 dB and 2.21 dB, respectively, and SI-SDR improvements of 2.74 dB and 2.78 dB, respectively. On Libri3mix, the proposed models achieve average SDR improvements of 0.57 dB and 0.87 dB and average SI-SDR improvements of 0.88 dB and 1.4 dB for the parallel CNN-RNN-based and CNN-attention-based models, respectively. Our work indirectly contributes to SDG Goal 10 (Reduced Inequalities) by improving communication tools for diverse linguistic communities. Furthermore, this technology aids SDG Goal 9 (Industry, Innovation, and Infrastructure) by advancing AI-powered assistive technologies, fostering innovation, and building resilient communication systems.
ISSN: 2169-3536
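
The abstract describes a parallel combination of a convolutional branch for local context and a recurrent or attention branch for global context, keeping the maximum local/global context value at each time step. The record gives no architectural details, so the PyTorch sketch below is only a minimal illustration of that idea under our own assumptions; the ParallelLocalGlobalBlock name, the depthwise convolution, the layer sizes, and the element-wise max fusion are illustrative choices, not taken from the paper.

import torch
import torch.nn as nn

class ParallelLocalGlobalBlock(nn.Module):
    # Illustrative only: a depthwise 1-D convolution models local context,
    # multi-head self-attention models global context, and the two branch
    # outputs are fused by an element-wise maximum at every time step.
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1, groups=channels),
            nn.PReLU(),
        )
        self.global_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels)
        local = self.local(x.transpose(1, 2)).transpose(1, 2)   # local context
        global_ctx, _ = self.global_attn(x, x, x)                # global context
        fused = torch.maximum(local, global_ctx)                 # keep the stronger response
        return self.norm(fused + x)                              # residual connection

# Toy usage: 2 utterances, 100 time frames, 64 feature channels.
block = ParallelLocalGlobalBlock(channels=64)
features = torch.randn(2, 100, 64)
print(block(features).shape)  # torch.Size([2, 100, 64])

The element-wise torch.maximum fusion mirrors the abstract's statement that the stronger of the local and global responses is retained per time step; a full separator would stack several such blocks inside an encoder/masking/decoder pipeline such as Conv-TasNet-style time-domain models.

The reported gains are given in SDR and SI-SDR. SI-SDR (scale-invariant signal-to-distortion ratio) is a standard separation metric with a fixed definition; the NumPy sketch below follows that usual definition, and the si_sdr helper name is ours for illustration.

import numpy as np

def si_sdr(estimate: np.ndarray, reference: np.ndarray, eps: float = 1e-8) -> float:
    # Scale-invariant SDR in dB for two 1-D signals of equal length.
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    # Project the estimate onto the reference to obtain the scaled target.
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    noise = estimate - target
    return 10.0 * np.log10((np.sum(target ** 2) + eps) / (np.sum(noise ** 2) + eps))

# An SI-SDR "improvement" is si_sdr(separated, source) - si_sdr(mixture, source),
# averaged over speakers and test utterances.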