MANet: motion-aware network for video action recognition

Bibliographic Details
Main Authors: Xiaoyang Li, Wenzhu Yang, Kanglin Wang, Tiebiao Wang, Chen Zhang
Format: Article
Language: English
Published: Springer, 2025-02-01
Series: Complex & Intelligent Systems
Online Access: https://doi.org/10.1007/s40747-024-01774-9
Description
Summary: Video action recognition is a fundamental task in video understanding. Actions in videos vary in speed and scale, and features extracted at a single spatio-temporal scale cannot cope with this variety. To address this problem, we propose a Motion-Aware Network (MANet), which includes three key modules: (1) a Local Motion Encoding Module (LMEM) for capturing local motion features, (2) a Spatio-Temporal Excitation Module (STEM) for extracting multi-granular motion information, and (3) a Multiple Temporal Aggregation Module (MTAM) for modeling multi-scale temporal information. Equipped with these modules, MANet captures multi-granularity spatio-temporal cues. We conducted extensive experiments on five mainstream datasets (Something-Something V1 & V2, Jester, Diving48, and UCF-101) to validate the effectiveness of MANet. MANet achieves competitive performance on Something-Something V1 (52.5%), Something-Something V2 (63.6%), Jester (95.9%), Diving48 (81.8%), and UCF-101 (86.2%). In addition, we visualize the feature representations of MANet using Grad-CAM to further validate its effectiveness.
ISSN: 2199-4536, 2198-6053
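
The summary names MANet's three modules but not their internals. The following is a minimal, illustrative PyTorch sketch of how such a pipeline might be composed; the module internals (frame differencing for LMEM, channel excitation for STEM, multi-scale temporal pooling for MTAM) and names such as MANetSketch are assumptions for illustration only, not the authors' implementation.

    # Illustrative sketch only: module internals are placeholder assumptions
    # based on the abstract, not the MANet paper's actual architecture.
    import torch
    import torch.nn as nn

    class LMEM(nn.Module):
        """Local Motion Encoding Module (placeholder): encodes local motion
        as frame-to-frame feature differences."""
        def forward(self, x):                      # x: (B, T, C, H, W)
            diff = x[:, 1:] - x[:, :-1]            # temporal differences
            diff = torch.cat([diff, diff[:, -1:]], dim=1)  # pad back to T
            return x + diff                        # inject motion cues

    class STEM(nn.Module):
        """Spatio-Temporal Excitation Module (placeholder): channel-wise
        excitation from globally pooled spatio-temporal context."""
        def __init__(self, channels):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // 4), nn.ReLU(),
                nn.Linear(channels // 4, channels), nn.Sigmoid())

        def forward(self, x):                      # x: (B, T, C, H, W)
            ctx = x.mean(dim=(1, 3, 4))            # pool T, H, W -> (B, C)
            gate = self.fc(ctx)[:, None, :, None, None]
            return x * gate                        # re-weight channels

    class MTAM(nn.Module):
        """Multiple Temporal Aggregation Module (placeholder): averages
        temporal pooling at several scales for multi-scale aggregation."""
        def __init__(self, scales=(1, 2, 4)):
            super().__init__()
            self.scales = scales

        def forward(self, x):                      # x: (B, T, C, H, W)
            outs = []
            for s in self.scales:
                pooled = nn.functional.avg_pool3d(
                    x.transpose(1, 2),             # -> (B, C, T, H, W)
                    kernel_size=(s, 1, 1), stride=(s, 1, 1))
                outs.append(pooled.mean(dim=2))    # collapse time
            return torch.stack(outs, dim=0).mean(dim=0)  # (B, C, H, W)

    class MANetSketch(nn.Module):
        """Toy composition: LMEM -> STEM -> MTAM -> classifier head.
        174 classes matches Something-Something V1."""
        def __init__(self, channels=64, num_classes=174):
            super().__init__()
            self.lmem, self.stem, self.mtam = LMEM(), STEM(channels), MTAM()
            self.head = nn.Linear(channels, num_classes)

        def forward(self, x):                      # x: (B, T, C, H, W)
            x = self.mtam(self.stem(self.lmem(x)))
            return self.head(x.mean(dim=(2, 3)))   # pool space, classify

    if __name__ == "__main__":
        clip = torch.randn(2, 8, 64, 14, 14)       # 8-frame feature maps
        print(MANetSketch()(clip).shape)           # torch.Size([2, 174])

The sketch only shows how motion encoding, excitation, and multi-scale temporal aggregation could chain into a single forward pass on clip-level feature maps; the paper itself should be consulted for the real module designs and training setup.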