Prediction of Voice Therapy Outcomes Using Machine Learning Approaches and SHAP Analysis: A K-VRQOL-Based Analysis

Bibliographic Details
Main Authors: Ji Hye Park, Ah Ra Jung, Ji-Na Lee, Ji-Yeoun Lee
Format: Article
Language: English
Published: MDPI AG, 2025-06-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/15/13/7045
Description
Summary: This study aims to identify personal, clinical, and acoustic predictors of voice therapy outcomes based on changes in Korean voice-related quality of life (K-VRQOL) scores, and to compare the predictive performance of traditional regression and machine learning models. A total of 102 participants undergoing voice therapy are retrospectively analyzed. Multiple regression analysis and four machine learning algorithms, namely random forest (RF), gradient boosting (GB), light gradient boosting machine (LightGBM), and extreme gradient boosting (XGBoost), are applied to predict changes in K-VRQOL scores across the total, physical, and emotional domains. The Shapley additive explanations (SHAP) approach is used to evaluate the relative contribution of each variable to the prediction outcomes. Female gender and comorbidity status emerge as significant predictors in both the total and physical domains. Among the acoustic features, jitter, speaking fundamental frequency (SFF), and maximum phonation time (MPT) are closely associated with improvements in physical voice function. LightGBM demonstrates the best overall performance, particularly in the total domain (R² = 32.54%), while GB excels in the physical domain. The emotional domain shows relatively low predictive power across the models. SHAP analysis reveals interpretable patterns, highlighting jitter and SFF as key contributors in the high-performing models. Integrating statistical and machine learning approaches provides a robust framework for predicting and interpreting voice therapy outcomes. These findings support the use of explainable artificial intelligence (AI) to enhance clinical decision-making and pave the way for personalized voice rehabilitation strategies.
ISSN: 2076-3417
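
As a rough, hypothetical illustration of the workflow described in the summary (gradient-boosted regression on personal, clinical, and acoustic features, interpreted with SHAP), the Python sketch below fits a LightGBM regressor on synthetic data and computes SHAP values. The feature names and codings (age, female, comorbidity, jitter, sff, mpt) and the simulated outcome are assumptions for illustration only and are not taken from the article's dataset or code.

```python
# Minimal sketch (not the authors' code): predict the change in K-VRQOL total score
# with LightGBM and explain the model with SHAP, using simulated data.
import numpy as np
import pandas as pd
import lightgbm as lgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
n = 102  # sample size reported in the abstract; the data here are simulated

X = pd.DataFrame({
    "age": rng.integers(20, 80, n),
    "female": rng.integers(0, 2, n),       # 1 = female (hypothetical coding)
    "comorbidity": rng.integers(0, 2, n),  # 1 = comorbid condition present
    "jitter": rng.normal(1.5, 0.6, n),     # jitter, %
    "sff": rng.normal(180, 30, n),         # speaking fundamental frequency, Hz
    "mpt": rng.normal(12, 4, n),           # maximum phonation time, s
})
# Simulated outcome: change in K-VRQOL total score after therapy
y = (5 * X["female"] - 3 * X["comorbidity"] - 2 * X["jitter"]
     + 0.05 * X["sff"] + rng.normal(0, 5, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05, random_state=0)
model.fit(X_tr, y_tr)
print(f"Held-out R^2: {r2_score(y_te, model.predict(X_te)):.3f}")

# SHAP quantifies each feature's contribution to individual predictions
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te, show=False)  # bee-swarm summary of contributions
```

The SHAP summary plot ranks features by their mean absolute contribution to the predictions, which is the kind of analysis that lets predictors such as jitter and SFF be identified as key drivers in a fitted model.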