Fully Interpretable and Adjustable Model for Depression Diagnosis: A Qualitative Approach
Recent advances in machine learning (ML) have enabled AI applications in mental disorder diagnosis, but many methods remain black-box or rely on post-hoc explanations that are not straightforward or actionable for mental health practitioners. Meanwhile, interpretable methods, such as k-nearest neighbors...
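The abstract contrasts black-box models and post-hoc explanations with inherently interpretable methods such as k-nearest neighbors. As a hedged illustration only (not the paper's actual model), the sketch below fits scikit-learn's `KNeighborsClassifier` on synthetic data and then inspects the retrieved neighbors, the property that lets a kNN prediction be explained as "this case resembles these known cases." The dataset, feature meanings, and the choice of k are assumptions for illustration.

```python
# Illustrative sketch only: an interpretable kNN classifier in the spirit the
# abstract describes. The data, features, and k are hypothetical, not taken
# from the paper.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical questionnaire-style features (e.g., symptom scores) and labels.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 4))               # 100 past cases, 4 features
y_train = (X_train.sum(axis=1) > 0).astype(int)   # toy rule: 1 = depressed, 0 = not

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

# A new case to evaluate.
x_new = rng.normal(size=(1, 4))
prediction = knn.predict(x_new)[0]

# Interpretability: show exactly which stored cases drove the decision.
distances, indices = knn.kneighbors(x_new)
print(f"Predicted label: {prediction}")
for dist, idx in zip(distances[0], indices[0]):
    print(f"  neighbor case {idx}: label={y_train[idx]}, distance={dist:.2f}")
```

Because every prediction is traceable to a handful of concrete neighboring cases, a practitioner can review (and, in principle, adjust) the cases and features driving the decision rather than relying on a post-hoc explanation of an opaque model.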
| Main Authors: | Kuo Deng, Xiaomeng Ye, Kun Wang, Angelina Pennino, Abigail Jarvis, Yola Hall |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | LibraryPress@UF, 2025-05-01 |
| Series: | Proceedings of the International Florida Artificial Intelligence Research Society Conference |
| Online Access: | https://journals.flvc.org/FLAIRS/article/view/138733 |
Similar Items
- Explainability and Interpretability in Concept and Data Drift: A Systematic Literature Review, by Daniele Pelosi, et al. Published: (2025-07-01)
- Exploring the Landscape of Explainable Artificial Intelligence (XAI): A Systematic Review of Techniques and Applications, by Sayda Umma Hamida, et al. Published: (2024-10-01)
- Explainable AI in medicine: challenges of integrating XAI into the future clinical routine, by Tim Räz, et al. Published: (2025-08-01)
- A Systematic Literature Review on AI Safety: Identifying Trends, Challenges, and Future Directions, by Wissam Salhab, et al. Published: (2024-01-01)
- AI anxiety: Explication and exploration of effect on state anxiety when interacting with AI doctors, by Hyun Yang, et al. Published: (2025-03-01)