Self-assessment in machines boosts human trust
Low trust in autonomous systems remains a significant barrier to adoption and performance. To effectively increase trust in these systems, machines must perform actions to calibrate human trust based on an accurate assessment of both their capability and human trust in real time. Existing efforts de...
| Main Authors: | Dana Warmsley, Krishna Choudhary, Jocelyn Rego, Emma Viani, Praveen K. Pilly |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2025-05-01 |
| Series: | Frontiers in Robotics and AI |
| Online Access: | https://www.frontiersin.org/articles/10.3389/frobt.2025.1557075/full |
Similar Items
- Effectiveness of Explainable Artificial Intelligence (XAI) Techniques for Improving Human Trust in Machine Learning Models: A Systematic Literature Review
  by: In-On Wiratsin, et al.
  Published: (2025-01-01)
- Survey on trusted cloud platform technology
  by: Xinfeng HE, et al.
  Published: (2019-02-01)
- Trusted auditing method of virtual machine based on improved expectation decision method
  by: Junfeng TIAN, et al.
  Published: (2018-06-01)
- Measuring trust in artificial intelligence: validation of an established scale and its short form
  by: Melanie J. McGrath, et al.
  Published: (2025-05-01)
- AI literacy and trust: A multi-method study of Human-GAI team collaboration
  by: Zilong Pan, et al.
  Published: (2025-05-01)