The impact of labeling automotive AI as trustworthy or reliable on user evaluation and technology acceptance
Main Authors:
Format: Article
Language: English
Published: Nature Portfolio, 2025-01-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-85558-2
Summary: This study explores whether labeling AI as either “trustworthy” or “reliable” influences user perceptions and acceptance of automotive AI technologies. Using a one-way between-subjects design, the research presented online participants (N = 478) with a text presenting guidelines for either trustworthy or reliable AI, before asking them to evaluate three vignette scenarios and complete a modified version of the Technology Acceptance Model covering variables such as perceived ease of use, human-like trust, and overall attitude. While labeling AI as “trustworthy” did not significantly influence people’s judgements of specific scenarios, it increased perceived ease of use and human-like trust, namely benevolence, suggesting a facilitating influence on usability and an anthropomorphic effect on user perceptions. The study provides insights into how specific labels affect the perceptions users adopt toward AI technology.
ISSN: 2045-2322