Fully Interpretable and Adjustable Model for Depression Diagnosis: A Qualitative Approach
Recent advances in machine learning (ML) have enabled AI applications in mental disorder diagnosis, but many methods remain black-box or rely on post-hoc explanations that are not straightforward or actionable for mental health practitioners. Meanwhile, interpretable methods, such as k-nearest neighbors (k-NN) classification, struggle with complex or high-dimensional data. Moreover, there is a lack of studies on users' real experience with interpretable AI. This study demonstrates a network-based k-NN model (NN-kNN) that combines interpretability with the predictive power of neural networks. The model's predictions can be fully explained in terms of activated features and neighboring cases. We experimented with the model to predict the risk of depression and interviewed practitioners in a qualitative study. The practitioners' feedback emphasized the model's adaptability, integration of clinical expertise, and transparency in the diagnostic process, highlighting its potential to ethically improve practitioners' diagnostic precision and confidence.
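The NN-kNN model described above justifies each prediction through activated features and neighboring cases. As a rough illustration of that explanation style only (a plain k-NN sketch with hypothetical names and toy data, not the paper's NN-kNN), a classifier can return the exact cases behind each vote:

```python
import numpy as np

def knn_predict_with_neighbors(X_train, y_train, x, k=3):
    """Plain k-NN majority vote that also returns its supporting neighbors.

    Illustrative only: shows how a case-based prediction can be traced
    back to concrete neighboring cases (not the paper's NN-kNN model).
    """
    X_train = np.asarray(X_train, dtype=float)
    x = np.asarray(x, dtype=float)
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every stored case
    idx = np.argsort(dists)[:k]                   # indices of the k closest cases
    votes = [y_train[i] for i in idx]
    label = max(set(votes), key=votes.count)      # majority label among neighbors
    # The "explanation": exactly which cases, at which distances, drove the vote
    explanation = [(int(i), float(dists[i]), y_train[i]) for i in idx]
    return label, explanation

# Toy, made-up screening data: two clusters of cases
X = [[0.0, 0.0], [0.1, 0.2], [3.0, 3.0], [3.1, 2.9]]
y = ["low-risk", "low-risk", "high-risk", "high-risk"]
pred, why = knn_predict_with_neighbors(X, y, [2.9, 3.1], k=3)
print(pred)  # → high-risk
print(why)   # nearest cases (index, distance, label) supporting the prediction
```

The returned `explanation` list is the whole point: a practitioner can inspect which prior cases produced the label, rather than trusting an opaque score.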
| Main Authors: | Kuo Deng, Xiaomeng Ye, Kun Wang, Angelina Pennino, Abigail Jarvis, Yola Hall |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | LibraryPress@UF, 2025-05-01 |
| Series: | Proceedings of the International Florida Artificial Intelligence Research Society Conference |
| Subjects: | explainable AI; AI in healthcare; mental health; interpretable AI |
| Online Access: | https://journals.flvc.org/FLAIRS/article/view/138733 |
| _version_ | 1850138037829238784 |
|---|---|
| author | Kuo Deng; Xiaomeng Ye; Kun Wang; Angelina Pennino; Abigail Jarvis; Yola Hall |
| collection | DOAJ |
| description |
Recent advances in machine learning (ML) have enabled AI applications in mental disorder diagnosis, but many methods remain black-box or rely on post-hoc explanations that are not straightforward or actionable for mental health practitioners. Meanwhile, interpretable methods, such as k-nearest neighbors (k-NN) classification, struggle with complex or high-dimensional data. Moreover, there is a lack of studies on users' real experience with interpretable AI. This study demonstrates a network-based k-NN model (NN-kNN) that combines interpretability with the predictive power of neural networks. The model's predictions can be fully explained in terms of activated features and neighboring cases. We experimented with the model to predict the risk of depression and interviewed practitioners in a qualitative study. The practitioners' feedback emphasized the model's adaptability, integration of clinical expertise, and transparency in the diagnostic process, highlighting its potential to ethically improve practitioners' diagnostic precision and confidence.
|
| format | Article |
| id | doaj-art-3054e1d79a0f4ddcb0da2feaf659b805 |
| institution | OA Journals |
| issn | 2334-0754; 2334-0762 |
| language | English |
| publishDate | 2025-05-01 |
| publisher | LibraryPress@UF |
| record_format | Article |
| series | Proceedings of the International Florida Artificial Intelligence Research Society Conference |
| affiliation | Kuo Deng (Berry College); Xiaomeng Ye (Berry College); Kun Wang (The University of Iowa); Angelina Pennino (Berry College); Abigail Jarvis (Berry College); Yola Hall (Berry College) |
| doi | 10.32473/flairs.38.1.138733 |
| volume | 38 |
| issue | 1 |
| title | Fully Interpretable and Adjustable Model for Depression Diagnosis: A Qualitative Approach |
| topic | explainable AI; AI in healthcare; mental health; interpretable AI |
| url | https://journals.flvc.org/FLAIRS/article/view/138733 |