Spatiotemporal decoupling attention transformer for 3D skeleton-based driver action recognition

Bibliographic Details
Main Authors: Zhuoyan Xu, Jingke Xu
Format: Article
Language: English
Published: Springer 2025-02-01
Series: Complex & Intelligent Systems
Online Access: https://doi.org/10.1007/s40747-025-01811-1
Description
Summary: Driver action recognition is crucial for in-vehicle safety. We argue that the following factors limit related research. First, spatial constraints and obstructions inside the vehicle restrict the range of motion, resulting in similar action patterns and making it difficult to capture the full body posture. Second, in skeleton-based action recognition, the joint dependencies established by self-attention are usually computed within a single frame, ignoring both the effect of the body's spatial structure on the dependency weights and the dependencies between frames. Moreover, common convolutions in the temporal stream focus only on frame-level temporal features and ignore motion-pattern features at a higher semantic level. We propose a novel spatiotemporal decoupling attention transformer (SDA-TR). Its SDA module uses a spatiotemporal decoupling strategy that splits the weight computation according to body structure and establishes joint dependencies directly across multiple frames. Its TFA module aggregates sub-action-level and frame-level temporal features to improve recognition accuracy for similar actions. On the driver action recognition dataset Drive&Act, using driver upper-body skeletons, SDA-TR achieves state-of-the-art performance. SDA-TR also achieves 92.2%/95.8% accuracy under the CS/CV benchmarks of NTU RGB+D 60 and 88.6%/89.8% accuracy under the CS/CSet benchmarks of NTU RGB+D 120, on par with other state-of-the-art methods. Our method demonstrates strong scalability and generalization for action recognition.
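The abstract describes the SDA and TFA modules only at a high level. As a rough, non-authoritative illustration of the two ideas (attention weights decoupled by body-part group and computed across a window of frames; frame-level features fused with pooled sub-action-level features), a minimal PyTorch sketch might look as follows. Every name here (SpatiotemporalDecoupledAttention, tfa_aggregate, the joint grouping, the window and segment lengths) is an assumption made for illustration, not the paper's implementation.

# Illustrative sketch only; class/function names, grouping, window and
# segment lengths are assumptions, not the authors' code.
import torch
import torch.nn as nn

class SpatiotemporalDecoupledAttention(nn.Module):
    """Self-attention over joint tokens from a window of frames,
    computed separately per body-part group (an assumed decoupling)."""
    def __init__(self, channels, num_heads=4, window=3):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x, groups):
        # x: (B, T, V, C) skeleton features; groups: joint-index lists that
        # together should partition the V joints (e.g., torso vs. limbs)
        B, T, V, C = x.shape
        out = torch.zeros_like(x)
        for g in groups:                      # decouple weights by body structure
            xg = x[:, :, g, :]                # (B, T, |g|, C)
            for t0 in range(0, T, self.window):
                win = xg[:, t0:t0 + self.window]          # (B, w, |g|, C)
                w = win.shape[1]
                # flatten the temporal window into the token axis so attention
                # links joints across frames, not only within one frame
                tokens = win.reshape(B, w * len(g), C)
                y, _ = self.attn(tokens, tokens, tokens)
                out[:, t0:t0 + w, g, :] = y.reshape(B, w, len(g), C)
        return out

def tfa_aggregate(x, sub_len=4):
    # Assumed TFA-style aggregation: pool frames into sub-action segments,
    # then fuse segment-level features back into the frame-level stream.
    # Assumes T is divisible by sub_len for brevity.
    B, T, V, C = x.shape
    seg = x.reshape(B, T // sub_len, sub_len, V, C).mean(dim=2)  # sub-action level
    seg = seg.repeat_interleave(sub_len, dim=1)                  # back to T frames
    return x + seg                                               # frame + sub-action

# usage (illustrative): x = torch.randn(2, 12, 25, 64)
# groups = [list(range(0, 5)), list(range(5, 25))]
# y = tfa_aggregate(SpatiotemporalDecoupledAttention(64)(x, groups))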
ISSN: 2199-4536, 2198-6053