Generating Explanations for Autonomous Robots: A Systematic Review
Main Authors: | David Sobrin-Hidalgo, Angel Manuel Guerrero-Higueras, Vicente Matellan-Olivera |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2025-01-01 |
Series: | IEEE Access |
Subjects: | Explainability; eXplainable autonomous robot; human-robot interaction; literature review; robotics; survey |
Online Access: | https://ieeexplore.ieee.org/document/10855405/ |
author | David Sobrin-Hidalgo; Angel Manuel Guerrero-Higueras; Vicente Matellan-Olivera |
collection | DOAJ |
description | Building trust between humans and robots has long interested the robotics community. Various studies have aimed to clarify the factors that influence the development of user trust. In Human-Robot Interaction (HRI) environments, a critical aspect of trust development is the robot’s ability to make its behavior understandable. The concept of an eXplainable Autonomous Robot (XAR) addresses this requirement. However, giving a robot self-explanatory abilities is a complex task: robot behavior comprises multiple skills and diverse subsystems. This complexity has led to research into a wide range of methods for generating explanations of robot behavior. This paper presents a systematic literature review that analyzes existing strategies for generating explanations in robots and examines current XAR trends. Results indicate promising advances in explainability systems; however, these systems are still unable to fully cover the complex behavior of autonomous robots. Furthermore, we identify a lack of consensus on the theoretical concept of explainability, as well as the need for a robust methodology to assess explainability methods and tools. |
format | Article |
id | doaj-art-606df4515fe7472dabacde9a047ded46 |
institution | Kabale University |
issn | 2169-3536 |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
doi | 10.1109/ACCESS.2025.3535097 |
article_number | 10855405 |
volume | 13 |
pages | 20413-20426 |
author_orcid | David Sobrin-Hidalgo: https://orcid.org/0009-0005-7673-5921; Angel Manuel Guerrero-Higueras: https://orcid.org/0000-0001-8277-0700; Vicente Matellan-Olivera: https://orcid.org/0000-0001-7844-9658 |
affiliation | Robotics Group, University of León, Campus de Vegazana, León, Spain (all authors) |
title | Generating Explanations for Autonomous Robots: A Systematic Review |
topic | Explainability; eXplainable autonomous robot; human-robot interaction; literature review; robotics; survey |
url | https://ieeexplore.ieee.org/document/10855405/ |