Energy Urban Domain: Personalized Evaluation of Expert and Non-Expert Stakeholder Interaction With Artificial Intelligence Through ChatGPT Using the VIRTSI Model
| Main Authors: | , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10949140/ |
| Summary: | This paper explores the application of Generative AI, specifically ChatGPT, in urban energy management, assessing human users’ trust in AI systems that, while often accurate, can still make mistakes. It examines how different stakeholder groups—AI experts, energy domain experts, and non-experts—develop trust, distrust, or overtrust in AI-generated outputs and highlights the risks associated with these trust states. While overtrust can lead to blind reliance on incorrect AI outputs, distrust can result in the unnecessary rejection of accurate AI recommendations, ultimately reducing the effectiveness of AI-assisted decision-making. Using the VIRTSI (Variability and Impact of Reciprocal Trust States towards Intelligent Systems) methodology, this research monitors human-AI trust evolution through Deterministic Finite Automata (DFA) and quantifies trust behaviors using user-adapted Confusion Matrices, addressing a critical gap in AI trust dynamics that traditional acceptance models overlook. The findings validate VIRTSI’s ability to track trust transitions. The study reveals that AI experts exhibit skepticism due to their awareness of AI’s limitations, energy experts tend to overtrust AI, likely influenced by its confident and seemingly reliable responses, and non-experts display inconsistent trust, highlighting decision-making challenges. These findings confirm VIRTSI’s premise that trust in AI is dynamic, varies by user expertise, and must be continuously monitored and assessed. Ultimately, this study strengthens VIRTSI as a necessary framework for assessing and optimizing trust in AI-driven sustainability solutions, ensuring that AI systems are not only trusted but also used effectively and responsibly in energy applications. Unlike other models of technology acceptance that focus solely on adoption, VIRTSI provides a continuous and quantifiable approach to trust calibration, identifying harmful trust patterns and guiding improvements in AI-human interaction over time. |
| ISSN: | 2169-3536 |