ViE-Take: A Vision-Driven Multi-Modal Dataset for Exploring the Emotional Landscape in Takeover Safety of Autonomous Driving

Bibliographic Details
Main Authors: Yantong Wang, Yu Gu, Tong Quan, Jiaoyun Yang, Mianxiong Dong, Ning An, Fuji Ren
Format: Article
Language: English
Published: American Association for the Advancement of Science (AAAS) 2025-01-01
Series: Research
Online Access: https://spj.science.org/doi/10.34133/research.0603
Description
Summary: Takeover safety is drawing increasing attention in intelligent transportation as new energy vehicles with cutting-edge autopilot capabilities proliferate on the road. Despite recent studies highlighting the importance of drivers’ emotions in takeover safety, the lack of emotion-aware takeover datasets hinders further investigation, thereby constraining potential applications in this field. To this end, we introduce ViE-Take, the first Vision-driven dataset for exploring the Emotional landscape in Takeovers of autonomous driving (vision is used because it is the most cost-effective and user-friendly sensing modality for commercial driver monitoring systems). ViE-Take enables a comprehensive exploration of the impact of emotions on drivers’ takeover performance through 3 key attributes: multi-source emotion elicitation, multi-modal driver data collection, and multi-dimensional emotion annotations. To aid the use of ViE-Take, we provide 4 deep models (corresponding to 4 prevalent learning strategies) for predicting 3 aspects of drivers’ takeover performance: readiness, reaction time, and quality. These models benefit various downstream tasks, such as driver emotion recognition and regulation for automobile manufacturers. Initial analysis and experiments conducted on ViE-Take indicate that (a) emotions have diverse impacts on takeover performance, some of which are counterintuitive; (b) highly expressive social media clips, despite their brevity, are effective in eliciting emotions (a foundation for emotion regulation); and (c) predicting takeover performance solely through deep learning on vision data is not only feasible but also holds great potential.
ISSN:2639-5274
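
For readers who want a concrete picture of the prediction tasks the abstract describes, below is a minimal, hypothetical PyTorch sketch of a vision-only model with three heads (readiness, reaction time, quality). The backbone, head sizes, class count, and input shape are illustrative assumptions and do not correspond to the four models released with ViE-Take.

```python
# Minimal, hypothetical sketch of a vision-only takeover-performance predictor.
# All architectural choices here (backbone depth, head sizes, input resolution)
# are illustrative assumptions, not the models released with ViE-Take.
import torch
import torch.nn as nn


class TakeoverPredictor(nn.Module):
    def __init__(self, num_readiness_classes: int = 3):
        super().__init__()
        # Small convolutional backbone over driver-facing camera frames.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # One head per takeover-performance aspect named in the abstract.
        self.readiness_head = nn.Linear(64, num_readiness_classes)  # classification
        self.reaction_time_head = nn.Linear(64, 1)                  # regression (e.g., seconds)
        self.quality_head = nn.Linear(64, 1)                        # regression (quality score)

    def forward(self, frames: torch.Tensor) -> dict[str, torch.Tensor]:
        features = self.backbone(frames)
        return {
            "readiness": self.readiness_head(features),
            "reaction_time": self.reaction_time_head(features),
            "quality": self.quality_head(features),
        }


if __name__ == "__main__":
    model = TakeoverPredictor()
    batch = torch.randn(4, 3, 112, 112)  # 4 dummy RGB driver-camera frames
    outputs = model(batch)
    print({name: tensor.shape for name, tensor in outputs.items()})
```

In practice the readiness head would be trained with a classification loss and the reaction-time and quality heads with regression losses; how the four released models combine such objectives under their respective learning strategies is detailed in the article itself.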