Is human-like decision making explainable? Towards an explainable artificial intelligence for autonomous vehicles
To achieve trustworthy human-like decisions for autonomous vehicles (AVs), this paper proposes a new explainable framework for personalized human-like driving intention analysis. In the first stage, we adopt a spectral clustering method for driving style characterization and introduce a misclassification cost matrix to describe different driving needs. Based on the parallelism in the complex neural network of the human brain, we construct a Width Human-like Neural Network (WNN) model for personalized cognitive and human-like driving intention decision making. In the second stage, we draw inspiration from the field of brain-like trusted AI to construct a robust, in-depth, and unbiased evaluation and interpretability framework involving three dimensions: Permutation Importance (PI) analysis, Partial Dependence Plot (PDP) analysis, and model complexity analysis. An empirical investigation using real driving trajectory data from Kunming, China, confirms the ability of our approach to predict potential driving decisions with high accuracy while providing the rationale for implicit AV decisions. These findings have the potential to inform ongoing research on brain-like neural learning and could function as a catalyst for developing swifter and more potent algorithmic solutions in the realm of intelligent transportation.
Saved in:
Main Authors: | Jiming Xie, Yan Zhang, Yaqin Qin, Bijun Wang, Shuai Dong, Ke Li, Yulan Xia |
---|---|
Format: | Article |
Language: | English |
Published: | Elsevier, 2025-01-01 |
Series: | Transportation Research Interdisciplinary Perspectives |
Subjects: | Human-like autonomous driving system; Decision making; Interpretability; Width human-like neural network; Explainable artificial intelligence |
Online Access: | http://www.sciencedirect.com/science/article/pii/S2590198224002641 |
_version_ | 1823864290115846144 |
---|---|
author | Jiming Xie; Yan Zhang; Yaqin Qin; Bijun Wang; Shuai Dong; Ke Li; Yulan Xia |
author_facet | Jiming Xie; Yan Zhang; Yaqin Qin; Bijun Wang; Shuai Dong; Ke Li; Yulan Xia |
author_sort | Jiming Xie |
collection | DOAJ |
description | To achieve trustworthy human-like decisions for autonomous vehicles (AVs), this paper proposes a new explainable framework for personalized human-like driving intention analysis. In the first stage, we adopt a spectral clustering method for driving style characterization and introduce a misclassification cost matrix to describe different driving needs. Based on the parallelism in the complex neural network of the human brain, we construct a Width Human-like Neural Network (WNN) model for personalized cognitive and human-like driving intention decision making. In the second stage, we draw inspiration from the field of brain-like trusted AI to construct a robust, in-depth, and unbiased evaluation and interpretability framework involving three dimensions: Permutation Importance (PI) analysis, Partial Dependence Plot (PDP) analysis, and model complexity analysis. An empirical investigation using real driving trajectory data from Kunming, China, confirms the ability of our approach to predict potential driving decisions with high accuracy while providing the rationale for implicit AV decisions. These findings have the potential to inform ongoing research on brain-like neural learning and could function as a catalyst for developing swifter and more potent algorithmic solutions in the realm of intelligent transportation. |
format | Article |
id | doaj-art-f57f3a3c9fad45b7bcede581ea7f048d |
institution | Kabale University |
issn | 2590-1982 |
language | English |
publishDate | 2025-01-01 |
publisher | Elsevier |
record_format | Article |
series | Transportation Research Interdisciplinary Perspectives |
spelling | doaj-art-f57f3a3c9fad45b7bcede581ea7f048d (2025-02-09T05:01:10Z); eng; Elsevier; Transportation Research Interdisciplinary Perspectives; 2590-1982; 2025-01-01; Vol. 29, Article 101278. Title: Is human-like decision making explainable? Towards an explainable artificial intelligence for autonomous vehicles. Authors and affiliations: Jiming Xie (Faculty of Transportation Engineering, Kunming University of Science and Technology, Kunming, China); Yan Zhang (Faculty of Transportation Engineering, Kunming University of Science and Technology, Kunming, China; School of Systems Science, Beijing Jiaotong University, Beijing, China); Yaqin Qin (Faculty of Transportation Engineering, Kunming University of Science and Technology, Kunming, China; corresponding author); Bijun Wang (Faculty of Transportation Engineering, Kunming University of Science and Technology, Kunming, China); Shuai Dong (Faculty of Transportation Engineering, Kunming University of Science and Technology, Kunming, China); Ke Li (Faculty of Transportation Engineering, Kunming University of Science and Technology, Kunming, China); Yulan Xia (Department of Traffic Engineering, University of Shanghai for Science and Technology, Shanghai, China; corresponding author). Abstract: To achieve trustworthy human-like decisions for autonomous vehicles (AVs), this paper proposes a new explainable framework for personalized human-like driving intention analysis. In the first stage, we adopt a spectral clustering method for driving style characterization and introduce a misclassification cost matrix to describe different driving needs. Based on the parallelism in the complex neural network of the human brain, we construct a Width Human-like Neural Network (WNN) model for personalized cognitive and human-like driving intention decision making. In the second stage, we draw inspiration from the field of brain-like trusted AI to construct a robust, in-depth, and unbiased evaluation and interpretability framework involving three dimensions: Permutation Importance (PI) analysis, Partial Dependence Plot (PDP) analysis, and model complexity analysis. An empirical investigation using real driving trajectory data from Kunming, China, confirms the ability of our approach to predict potential driving decisions with high accuracy while providing the rationale for implicit AV decisions. These findings have the potential to inform ongoing research on brain-like neural learning and could function as a catalyst for developing swifter and more potent algorithmic solutions in the realm of intelligent transportation. Online access: http://www.sciencedirect.com/science/article/pii/S2590198224002641. Keywords: Human-like autonomous driving system; Decision making; Interpretability; Width human-like neural network; Explainable artificial intelligence |
spellingShingle | Jiming Xie; Yan Zhang; Yaqin Qin; Bijun Wang; Shuai Dong; Ke Li; Yulan Xia; Is human-like decision making explainable? Towards an explainable artificial intelligence for autonomous vehicles; Transportation Research Interdisciplinary Perspectives; Human-like autonomous driving system; Decision making; Interpretability; Width human-like neural network; Explainable artificial intelligence |
title | Is human-like decision making explainable? Towards an explainable artificial intelligence for autonomous vehicles |
title_full | Is human-like decision making explainable? Towards an explainable artificial intelligence for autonomous vehicles |
title_fullStr | Is human-like decision making explainable? Towards an explainable artificial intelligence for autonomous vehicles |
title_full_unstemmed | Is human-like decision making explainable? Towards an explainable artificial intelligence for autonomous vehicles |
title_short | Is human-like decision making explainable? Towards an explainable artificial intelligence for autonomous vehicles |
title_sort | is human like decision making explainable towards an explainable artificial intelligence for autonomous vehicles |
topic | Human-like autonomous driving system; Decision making; Interpretability; Width human-like neural network; Explainable artificial intelligence |
url | http://www.sciencedirect.com/science/article/pii/S2590198224002641 |
work_keys_str_mv | AT jimingxie ishumanlikedecisionmakingexplainabletowardsanexplainableartificialintelligenceforautonomousvehicles AT yanzhang ishumanlikedecisionmakingexplainabletowardsanexplainableartificialintelligenceforautonomousvehicles AT yaqinqin ishumanlikedecisionmakingexplainabletowardsanexplainableartificialintelligenceforautonomousvehicles AT bijunwang ishumanlikedecisionmakingexplainabletowardsanexplainableartificialintelligenceforautonomousvehicles AT shuaidong ishumanlikedecisionmakingexplainabletowardsanexplainableartificialintelligenceforautonomousvehicles AT keli ishumanlikedecisionmakingexplainabletowardsanexplainableartificialintelligenceforautonomousvehicles AT yulanxia ishumanlikedecisionmakingexplainabletowardsanexplainableartificialintelligenceforautonomousvehicles |