The singing style of female roles in ethnic opera under artificial intelligence and deep neural networks
Abstract: With the rapid advancement of artificial intelligence technology, efficiently extracting and analyzing music performance style features has become an important topic in the field of music information processing. This work focuses on the classification of singing styles of female roles in ethnic opera and proposes an Attention-Enhanced 1D Residual Gated Convolutional and Bidirectional Recurrent Neural Network (ARGC-BRNN) model. The model uses a Residual Gated Linear Unit with Squeeze-and-Excitation (RGLU-SE) block to efficiently extract multi-level features of singing styles and combines a Bidirectional Recurrent Neural Network to model temporal dependencies. Finally, it uses an attention mechanism for global feature aggregation and classification. Experiments conducted on a self-constructed dataset of ethnic opera female-role singing segments and the publicly available MagnaTagATune dataset show that ARGC-BRNN outperforms the comparison models in classification performance. The model achieves an accuracy of 0.872 on the self-constructed dataset and an Area Under the Curve (AUC) of 0.912 on the MagnaTagATune dataset, improving on the comparison models by 0.44% and 0.46%, respectively. The model also demonstrates significant advantages in training efficiency. The results indicate that the ARGC-BRNN model can effectively capture music singing style features, providing technical support for the digital and intelligent analysis of ethnic opera art.
| Main Author: | Huixia Yang (Xingzhi College, Zhejiang Normal University) |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-06-01 |
| Series: | Scientific Reports |
| ISSN: | 2045-2322 |
| Subjects: | Singing style classification; Ethnic opera; Residual gated convolution; Bidirectional recurrent neural network; Attention mechanism |
| Online Access: | https://doi.org/10.1038/s41598-025-05429-8 |
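
The abstract above outlines a three-stage pipeline: stacked RGLU-SE convolutional blocks for multi-level feature extraction, a bidirectional recurrent layer for temporal modeling, and an attention layer for global aggregation before classification. The following is a minimal PyTorch-style sketch of that pipeline, not the authors' implementation: the block count, layer widths, the choice of GRU, the SE reduction ratio, and the 128-band input features are all assumptions, since this record gives no architectural details.

```python
# Minimal sketch of the pipeline described in the abstract (assumed wiring and sizes).
import torch
import torch.nn as nn


class RGLUSEBlock(nn.Module):
    """Residual gated 1D convolution with a squeeze-and-excitation gate (illustrative)."""

    def __init__(self, channels: int, kernel_size: int = 3, se_ratio: int = 8):
        super().__init__()
        pad = kernel_size // 2
        self.conv_feat = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.conv_gate = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.se = nn.Sequential(                       # squeeze-and-excitation on channels
            nn.AdaptiveAvgPool1d(1),
            nn.Conv1d(channels, channels // se_ratio, 1),
            nn.ReLU(),
            nn.Conv1d(channels // se_ratio, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, channels, time)
        gated = self.conv_feat(x) * torch.sigmoid(self.conv_gate(x))  # GLU-style gating
        gated = gated * self.se(gated)                                # channel re-weighting
        return x + gated                                              # residual connection


class ARGCBRNNSketch(nn.Module):
    """Stacked RGLU-SE blocks -> bidirectional GRU -> attention pooling -> classifier."""

    def __init__(self, in_channels: int = 128, channels: int = 128,
                 num_blocks: int = 4, hidden: int = 128, num_classes: int = 10):
        super().__init__()
        self.stem = nn.Conv1d(in_channels, channels, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[RGLUSEBlock(channels) for _ in range(num_blocks)])
        self.birnn = nn.GRU(channels, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)          # per-frame attention scores
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, in_channels, time)
        h = self.blocks(self.stem(x))                 # (batch, channels, time)
        h, _ = self.birnn(h.transpose(1, 2))          # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)        # attention weights over time
        pooled = (w * h).sum(dim=1)                   # global attention-weighted aggregation
        return self.classifier(pooled)                # class logits


if __name__ == "__main__":
    # e.g. a batch of 8 clips with 128 assumed spectral bands and 256 frames
    logits = ARGCBRNNSketch()(torch.randn(8, 128, 256))
    print(logits.shape)  # torch.Size([8, 10])
```

The number of style classes, input feature type, and training setup would need to be taken from the article itself; the sketch only illustrates how the three described components can be composed.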