ViE-Take: A Vision-Driven Multi-Modal Dataset for Exploring the Emotional Landscape in Takeover Safety of Autonomous Driving
Takeover safety is drawing increasing attention in intelligent transportation as new energy vehicles with cutting-edge autopilot capabilities proliferate on the road. Although recent studies highlight the importance of drivers’ emotions in takeover safety, the lack of emotion-aware takeover datasets hinders further investigation and constrains potential applications in this field. To this end, we introduce ViE-Take, the first Vision-driven dataset for exploring the Emotional landscape in Takeovers of autonomous driving (vision is used because it is the most cost-effective and user-friendly sensing option for commercial driver monitoring systems). ViE-Take enables a comprehensive exploration of the impact of emotions on drivers’ takeover performance through 3 key attributes: multi-source emotion elicitation, multi-modal driver data collection, and multi-dimensional emotion annotations. To aid the use of ViE-Take, we provide 4 deep models (corresponding to 4 prevalent learning strategies) for predicting 3 aspects of drivers’ takeover performance: readiness, reaction time, and quality. These models benefit various downstream tasks, such as driver emotion recognition and regulation for automobile manufacturers. Initial analysis and experiments on ViE-Take indicate that (a) emotions have diverse, sometimes counterintuitive, impacts on takeover performance; (b) highly expressive social media clips, despite their brevity, are effective in eliciting emotions (a foundation for emotion regulation); and (c) predicting takeover performance solely through deep learning on vision data is not only feasible but also holds great potential.
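The abstract's third finding, that takeover performance can be predicted from vision data alone with deep learning, can be illustrated with a minimal sketch. The model below is not one of the 4 baselines released with ViE-Take; it is an assumed PyTorch example (the `TakeoverPredictor` name, layer sizes, and target encodings are illustrative only) showing how a driver-facing video clip could be mapped to the 3 performance targets named in the abstract: readiness, reaction time, and quality.

```python
# Minimal illustrative sketch (not the authors' released models): a vision-only
# takeover-performance predictor over a short clip of driver-facing frames.
# All layer sizes and target encodings are assumptions for illustration.
import torch
import torch.nn as nn


class TakeoverPredictor(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        # Per-frame encoder: a small CNN over RGB driver-camera frames.
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # -> (B*T, 64)
        )
        # Temporal aggregation over the frame sequence.
        self.temporal = nn.GRU(64, hidden, batch_first=True)
        # One head per takeover-performance aspect from the abstract.
        self.readiness_head = nn.Linear(hidden, 2)   # ready / not ready (assumed encoding)
        self.reaction_head = nn.Linear(hidden, 1)    # reaction time in seconds
        self.quality_head = nn.Linear(hidden, 1)     # takeover-quality score

    def forward(self, clip: torch.Tensor):
        # clip: (batch, frames, 3, H, W) driver-facing video snippet.
        b, t, c, h, w = clip.shape
        feats = self.frame_encoder(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, last = self.temporal(feats)               # last hidden state: (1, B, hidden)
        z = last.squeeze(0)
        return {
            "readiness_logits": self.readiness_head(z),
            "reaction_time": self.reaction_head(z).squeeze(-1),
            "quality": self.quality_head(z).squeeze(-1),
        }


if __name__ == "__main__":
    model = TakeoverPredictor()
    dummy_clip = torch.randn(2, 16, 3, 112, 112)     # 2 clips of 16 frames each
    out = model(dummy_clip)
    print({k: tuple(v.shape) for k, v in out.items()})
```

A per-frame CNN followed by a recurrent layer is only one plausible design; the 4 models and learning strategies actually provided with the dataset are described in the article itself.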
| Main Authors: | Yantong Wang, Yu Gu, Tong Quan, Jiaoyun Yang, Mianxiong Dong, Ning An, Fuji Ren |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | American Association for the Advancement of Science (AAAS), 2025-01-01 |
| Series: | Research |
| Online Access: | https://spj.science.org/doi/10.34133/research.0603 |
| author | Yantong Wang, Yu Gu, Tong Quan, Jiaoyun Yang, Mianxiong Dong, Ning An, Fuji Ren |
|---|---|
| collection | DOAJ |
| format | Article |
| id | doaj-art-b8932df0b6254fc2acc3e5d38cdf52ab |
| institution | OA Journals |
| issn | 2639-5274 |
| language | English |
| publishDate | 2025-01-01 |
| publisher | American Association for the Advancement of Science (AAAS) |
| record_format | Article |
| series | Research |
| author affiliations | Yantong Wang: School of Biomedical Engineering, Anhui Medical University, Hefei, China; Yu Gu: I+ Lab, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China; Tong Quan: Key Laboratory of Knowledge Engineering with Big Data of the Ministry of Education, Hefei University of Technology, Hefei, China; Jiaoyun Yang: Key Laboratory of Knowledge Engineering with Big Data of the Ministry of Education, Hefei University of Technology, Hefei, China; Mianxiong Dong: Department of Sciences and Informatics, Muroran Institute of Technology, Hokkaido, Japan; Ning An: Key Laboratory of Knowledge Engineering with Big Data of the Ministry of Education, Hefei University of Technology, Hefei, China; Fuji Ren: I+ Lab, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China |
| title | ViE-Take: A Vision-Driven Multi-Modal Dataset for Exploring the Emotional Landscape in Takeover Safety of Autonomous Driving |
| url | https://spj.science.org/doi/10.34133/research.0603 |