Learning the Car-following Behavior of Drivers Using Maximum Entropy Deep Inverse Reinforcement Learning

The present study proposes a framework for learning the car-following behavior of drivers based on maximum entropy deep inverse reinforcement learning. The proposed framework enables learning the reward function, which is represented by a fully connected neural network, from driving data, including the speed of the driver's vehicle, the distance to the leading vehicle, and the relative speed. Data from two field tests with 42 drivers are used. After clustering the participants into aggressive and conservative groups, the car-following data were used to train the proposed model, a fully connected neural network model, and a recurrent neural network model. Adopting the fivefold cross-validation method, the proposed model was proved to have the lowest root mean squared percentage error and modified Hausdorff distance among the different models, exhibiting superior ability for reproducing drivers' car-following behaviors. Moreover, the proposed model captured the characteristics of different driving styles during car-following scenarios. The learned rewards and strategies were consistent with the demonstrations of the two groups. Inverse reinforcement learning can serve as a new tool to explain and model driving behavior, providing references for the development of human-like autonomous driving models.

Bibliographic Details
Main Authors: Yang Zhou, Rui Fu, Chang Wang
Format: Article
Language: English
Published: Wiley 2020-01-01
Series: Journal of Advanced Transportation
Online Access: http://dx.doi.org/10.1155/2020/4752651
author Yang Zhou
Rui Fu
Chang Wang
collection DOAJ
description The present study proposes a framework for learning the car-following behavior of drivers based on maximum entropy deep inverse reinforcement learning. The proposed framework enables learning the reward function, which is represented by a fully connected neural network, from driving data, including the speed of the driver’s vehicle, the distance to the leading vehicle, and the relative speed. Data from two field tests with 42 drivers are used. After clustering the participants into aggressive and conservative groups, the car-following data were used to train the proposed model, a fully connected neural network model, and a recurrent neural network model. Adopting the fivefold cross-validation method, the proposed model was proved to have the lowest root mean squared percentage error and modified Hausdorff distance among the different models, exhibiting superior ability for reproducing drivers’ car-following behaviors. Moreover, the proposed model captured the characteristics of different driving styles during car-following scenarios. The learned rewards and strategies were consistent with the demonstrations of the two groups. Inverse reinforcement learning can serve as a new tool to explain and model driving behavior, providing references for the development of human-like autonomous driving models.
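The abstract describes the core of the method: a fully connected network that maps a car-following state (own speed, gap to the leading vehicle, relative speed) to a scalar reward, with maximum entropy IRL assuming behavior probabilities proportional to exponentiated reward. The following is a minimal illustrative sketch of that idea, not the authors' implementation; the layer sizes, tanh nonlinearity, and example states are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_reward_net(n_in=3, n_hidden=16):
    """Random parameters for a two-layer reward network r(s; theta).
    Input: state [speed, gap, relative_speed]; output: scalar reward."""
    return {
        "W1": rng.normal(0.0, 0.1, (n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0.0, 0.1, (n_hidden, 1)),
        "b2": np.zeros(1),
    }

def reward(theta, states):
    """Scalar reward for each state row via a fully connected network."""
    h = np.tanh(states @ theta["W1"] + theta["b1"])
    return (h @ theta["W2"] + theta["b2"]).ravel()

def maxent_choice_probs(theta, states):
    """Maximum entropy principle over a set of candidate successor states:
    each candidate is preferred with probability proportional to
    exp(reward), i.e. a softmax over the learned reward."""
    r = reward(theta, states)
    p = np.exp(r - r.max())  # subtract max for numerical stability
    return p / p.sum()

# Three hypothetical car-following states (speed m/s, gap m, rel. speed m/s).
states = np.array([
    [15.0, 30.0,  0.0],   # steady following
    [20.0,  8.0, -3.0],   # closing fast at a short gap
    [10.0, 60.0,  2.0],   # falling behind
])
theta = init_reward_net()
probs = maxent_choice_probs(theta, states)
```

In the full method the reward parameters would be updated so that the expected state-visitation frequencies under this soft policy match those of the demonstration trajectories; the sketch above only shows the forward pass and the MaxEnt softmax it induces.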
format Article
id doaj-art-7e0a162634634afd875e38ba3e368d27
institution Kabale University
issn 0197-6729
2042-3195
language English
publishDate 2020-01-01
publisher Wiley
record_format Article
series Journal of Advanced Transportation
affiliation School of Automobile, Chang’an University, Middle Section of Nan Erhuan Road, Xi’an 710064, China (Yang Zhou; Rui Fu; Chang Wang)
title Learning the Car-following Behavior of Drivers Using Maximum Entropy Deep Inverse Reinforcement Learning
url http://dx.doi.org/10.1155/2020/4752651