Engineering Responsible And Explainable Models In Human-Agent Collectives


Bibliographic Details
Main Authors: Dhaminda B. Abeywickrama, Sarvapali D. Ramchurn
Format: Article
Language: English
Published: Taylor & Francis Group, 2024-12-01
Series: Applied Artificial Intelligence
Online Access: https://www.tandfonline.com/doi/10.1080/08839514.2023.2282834
Description
Summary: In human-agent collectives, humans and agents need to work collaboratively and agree on collective decisions. However, ensuring that agents make decisions responsibly is a complex task, especially when they encounter dilemmas in which no available choice is unambiguously preferred over the others. Methodologies that allow the certification of such systems are therefore urgently needed. In this paper, we propose a novel engineering methodology based on formal model checking as a step toward providing evidence for the certification of responsible and explainable decision making within human-agent collectives. Our approach, built on the MCMAS model checker, verifies decision-making behavior against logical formulae specified to guarantee safety and controllability and to address ethical concerns. We propose the use of counterexample traces and simulation results to provide the AI engineer with a judgment and an explanation of why actions may be refused or allowed. To demonstrate the practical feasibility of our approach, we evaluate it on the real-world problem of human-UAV (unmanned aerial vehicle) teaming in dynamic and uncertain environments.
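The core idea the abstract describes, verifying behavior against a safety property and returning a counterexample trace as an explanation when it fails, can be illustrated with a toy sketch. This is not the paper's MCMAS model: the states, transitions, and the "crash" hazard below are hypothetical, and the check is a plain reachability search standing in for the checker's verification of an AG (always globally) safety formula.

```python
from collections import deque

# Hypothetical UAV mission modeled as a labelled transition system.
# We check the safety property "AG not unsafe" (no reachable state is
# unsafe); on failure we return a counterexample trace, which plays the
# explanatory role the abstract assigns to model-checker counterexamples.
TRANSITIONS = {
    "idle":        ["takeoff"],
    "takeoff":     ["survey", "low_battery"],
    "survey":      ["return", "low_battery"],
    "low_battery": ["return", "crash"],  # assumed hazardous branch
    "return":      ["idle"],
    "crash":       [],
}
UNSAFE = {"crash"}

def check_safety(initial="idle"):
    """BFS over reachable states. Returns (True, None) if no unsafe
    state is reachable, else (False, trace) where trace is a shortest
    path from the initial state to an unsafe state."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if state in UNSAFE:
            # Reconstruct the counterexample trace back to the start.
            trace = []
            while state is not None:
                trace.append(state)
                state = parent[state]
            return False, list(reversed(trace))
        for nxt in TRANSITIONS[state]:
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return True, None

safe, trace = check_safety()
print(safe, trace)  # → False ['idle', 'takeoff', 'low_battery', 'crash']
```

The returned trace is exactly the kind of artifact an engineer can inspect to see *why* a behavior is refused; removing the hazardous transition would make the check return `(True, None)`.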
ISSN: 0883-9514; 1087-6545