Measuring trust in artificial intelligence: validation of an established scale and its short form
An understanding of the nature and function of human trust in artificial intelligence (AI) is fundamental to the safe and effective integration of these technologies into organizational settings. The Trust in Automation Scale is a commonly used self-report measure of trust in automated systems; however, it has not yet been subjected to comprehensive psychometric validation. Across two studies, we tested the capacity of the scale to effectively measure trust across a range of AI applications. Results indicate that the Trust in Automation Scale is a valid and reliable measure of human trust in AI; however, with 12 items, it is often impractical for contexts requiring frequent and minimally disruptive measurements. To address this limitation, we developed and validated a three-item version of the TIAS, the Short Trust in Automation Scale (S-TIAS). In two further studies, we tested the sensitivity of the S-TIAS to manipulations of the trustworthiness of an AI system, as well as the convergent validity of the scale and its capacity to predict intentions to rely on AI-generated recommendations. In both studies, the S-TIAS also demonstrated convergent validity and significantly predicted intentions to rely on the AI system in patterns similar to the TIAS. This suggests that the S-TIAS is a practical and valid alternative for measuring trust in automation and AI for the purposes of identifying antecedent factors of trust and predicting trust outcomes.
Saved in:
| Main Authors: | Melanie J. McGrath, Oliver Lack, James Tisch, Andreas Duenser |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2025-05-01 |
| Series: | Frontiers in Artificial Intelligence |
| Subjects: | trust; artificial intelligence; automation; human-AI teaming; collaborative intelligence; psychometrics |
| Online Access: | https://www.frontiersin.org/articles/10.3389/frai.2025.1582880/full |
| _version_ | 1850190500370317312 |
|---|---|
| author | Melanie J. McGrath; Oliver Lack; James Tisch; Andreas Duenser |
| author_sort | Melanie J. McGrath |
| collection | DOAJ |
| description | An understanding of the nature and function of human trust in artificial intelligence (AI) is fundamental to the safe and effective integration of these technologies into organizational settings. The Trust in Automation Scale is a commonly used self-report measure of trust in automated systems; however, it has not yet been subjected to comprehensive psychometric validation. Across two studies, we tested the capacity of the scale to effectively measure trust across a range of AI applications. Results indicate that the Trust in Automation Scale is a valid and reliable measure of human trust in AI; however, with 12 items, it is often impractical for contexts requiring frequent and minimally disruptive measurements. To address this limitation, we developed and validated a three-item version of the TIAS, the Short Trust in Automation Scale (S-TIAS). In two further studies, we tested the sensitivity of the S-TIAS to manipulations of the trustworthiness of an AI system, as well as the convergent validity of the scale and its capacity to predict intentions to rely on AI-generated recommendations. In both studies, the S-TIAS also demonstrated convergent validity and significantly predicted intentions to rely on the AI system in patterns similar to the TIAS. This suggests that the S-TIAS is a practical and valid alternative for measuring trust in automation and AI for the purposes of identifying antecedent factors of trust and predicting trust outcomes. |
| format | Article |
| id | doaj-art-9a1e6e4798874e28bf1e9c1e2f36e5bd |
| institution | OA Journals |
| issn | 2624-8212 |
| language | English |
| publishDate | 2025-05-01 |
| publisher | Frontiers Media S.A. |
| record_format | Article |
| series | Frontiers in Artificial Intelligence |
| spelling | doaj-art-9a1e6e4798874e28bf1e9c1e2f36e5bd (indexed 2025-08-20T02:15:16Z); eng; Frontiers Media S.A.; Frontiers in Artificial Intelligence; ISSN 2624-8212; 2025-05-01; vol. 8; doi:10.3389/frai.2025.1582880; article 1582880; Measuring trust in artificial intelligence: validation of an established scale and its short form; Melanie J. McGrath (Commonwealth Scientific and Industrial Research Organisation (CSIRO), Clayton, VIC, Australia); Oliver Lack (School of Psychology & Australian Institute for Machine Learning, University of Adelaide, Adelaide, SA, Australia); James Tisch (School of Psychological Sciences, University of Melbourne, Melbourne, VIC, Australia); Andreas Duenser (Commonwealth Scientific and Industrial Research Organisation (CSIRO), Sandy Bay, TAS, Australia); abstract as in the description field above |
| title | Measuring trust in artificial intelligence: validation of an established scale and its short form |
| topic | trust; artificial intelligence; automation; human-AI teaming; collaborative intelligence; psychometrics |
| url | https://www.frontiersin.org/articles/10.3389/frai.2025.1582880/full |
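The abstract describes validating a three-item short-form trust scale, which in psychometric practice typically involves checking internal consistency (e.g., Cronbach's alpha). As a purely illustrative sketch of that kind of check (not the paper's actual analysis; the data, function name, and parameters below are hypothetical), alpha for a three-item scale can be computed as:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses to a 3-item trust scale (1-7 Likert):
# a shared latent "trust" signal plus per-item noise.
rng = np.random.default_rng(0)
latent = rng.normal(4, 1, size=200)
items = np.clip(latent[:, None] + rng.normal(0, 0.7, (200, 3)), 1, 7)

alpha = cronbach_alpha(items)
```

Because the three simulated items share most of their variance through the latent signal, alpha comes out well above zero; real short scales are usually judged acceptable around 0.7 or higher.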