Evaluating mental health apps under uncertainty: a decision-support framework integrating linguistic distributions, disappointment theory, and double normalization-based multi-aggregation
| Main Authors: | , , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer Nature, 2025-08-01 |
| Series: | Humanities & Social Sciences Communications |
| Online Access: | https://doi.org/10.1057/s41599-025-05625-x |
| Summary: | Despite the proliferation of mental health apps (MHAs), their evaluation remains challenging due to inconsistent clinical validity, poor long-term engagement, and limited consideration of expert disagreement. Existing evaluations often use single-point scores or star ratings for decision-makers’ assessments, overlooking the uncertainty, variability, and emotional factors that shape decision-making. Additionally, many mHealth studies employ multi-attribute group decision-making (MAGDM) models that rely on a single normalization technique, which can introduce scale-related bias. To address these gaps, this study proposes a behavioral MAGDM framework, LDA–DT–DNMA, that integrates three key components: linguistic distribution assessment term sets (LDATS) to explicitly capture uncertainty and disagreement in decision-makers’ linguistic evaluations, disappointment theory (DT) to model emotional reactions such as disappointment and elation, and the double normalization-based multi-aggregation (DNMA) method to accommodate diverse evaluation attributes through dual normalization and multiple aggregation strategies. We also introduce the LDA sine entropy-based weight assignment (LDASEWA) method to derive attribute weights. When applied to an empirical evaluation of MHAs, the model demonstrates strong robustness and superior performance compared to existing approaches. This study contributes a psychologically realistic, uncertainty-aware decision-support framework for evaluating MHAs and other complex technologies. |
| ISSN: | 2662-9992 |
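The summary mentions deriving attribute weights from an entropy measure (LDASEWA). The paper's sine-entropy formula over linguistic distributions is not reproduced in this record; as a rough illustration of the general idea, the classic Shannon entropy weight method assigns larger weights to attributes whose scores vary more across alternatives. The function and data below are hypothetical, not taken from the article:

```python
import math

def entropy_weights(matrix):
    """Classic entropy weight method (illustrative stand-in for LDASEWA).

    matrix: rows = alternatives (e.g., apps), columns = attributes;
    scores assumed positive benefit-type values.
    """
    m = len(matrix)        # number of alternatives
    n = len(matrix[0])     # number of attributes
    k = 1.0 / math.log(m)  # scales entropy into [0, 1]
    divergences = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]  # column-normalized proportions
        e = -k * sum(x * math.log(x) for x in p if x > 0)  # Shannon entropy
        divergences.append(1.0 - e)   # low entropy -> high divergence
    s = sum(divergences)
    return [d / s for d in divergences]  # weights sum to 1

# Hypothetical scores of three apps on three attributes; attribute 2 is
# identical across apps, so it receives (near-)zero weight.
scores = [[0.8, 0.6, 0.9],
          [0.7, 0.6, 0.4],
          [0.9, 0.6, 0.7]]
w = entropy_weights(scores)
```

An attribute on which every alternative scores the same carries no discriminating information, so its entropy is maximal and its weight vanishes; the remaining weight mass shifts to the attributes that actually separate the alternatives.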