Research on Discriminative Skeleton-Based Action Recognition in Spatiotemporal Fusion and Human-Robot Interaction

A novel posture motion-based spatiotemporal fused graph convolutional network (PM-STGCN) is presented for skeleton-based action recognition. Existing skeleton-based methods compute the joint information within a single frame and the motion information of joints between adjacent frames independently from the human body skeleton structure, and then combine the classification results. Because this ignores the complex temporal and spatial relationships within a human action sequence, such methods are not very effective at distinguishing similar actions. In this work, we improve the ability to distinguish similar actions by focusing on spatiotemporal fusion and adaptive extraction of highly discriminative features. Firstly, a local posture motion-based temporal attention module (LPM-TAM) is proposed to suppress skeleton-sequence data with little motion in the temporal domain, concentrating the representation on motion posture features. Besides, a local posture motion-based channel attention module (LPM-CAM) is introduced to exploit representations that strongly discriminate between similar action classes. Finally, a posture motion-based spatiotemporal fusion module (PM-STF) is constructed that fuses the spatiotemporal skeleton data, filtering out low-information sequences and adaptively enhancing highly discriminative posture motion features. Extensive experiments demonstrate that the proposed model outperforms commonly used action recognition methods, and the human-robot interaction system designed around the action recognition model performs competitively with a speech-based interaction system.

Bibliographic Details
Main Authors: Qiubo Zhong, Caiming Zheng, Haoxiang Zhang
Author Affiliation (all authors): Robotics Institute, Ningbo University of Technology, Ningbo 315211, China
Format: Article
Language: English
Published: Wiley, 2020-01-01
Series: Complexity
ISSN: 1076-2787, 1099-0526
Online Access: http://dx.doi.org/10.1155/2020/8717942
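
As a rough illustration of the idea described in the abstract, the sketch below shows a minimal PyTorch-style temporal attention layer that down-weights frames with little local posture motion. It is a hypothetical reconstruction, not the authors' implementation: the module name, the (batch, channels, frames, joints) tensor layout, the pooling scheme, and the 1x1 convolution scoring head are all assumptions made for illustration.

```python
import torch
import torch.nn as nn


class LocalPostureMotionTemporalAttention(nn.Module):
    """Hypothetical sketch of an LPM-TAM-style layer: score each frame by how
    much local posture motion it carries and suppress low-motion frames.
    Shapes and layer choices are assumptions, not the paper's design."""

    def __init__(self, channels: int):
        super().__init__()
        # Map per-frame motion statistics (one value per channel) to a single
        # attention score per frame.
        self.score = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, joints) skeleton feature tensor
        motion = x[:, :, 1:] - x[:, :, :-1]                      # frame-to-frame differences
        motion = torch.cat([motion, motion[:, :, -1:]], dim=2)   # pad to keep frame count
        pooled = motion.abs().mean(dim=3)                        # (batch, channels, frames)
        attn = torch.sigmoid(self.score(pooled))                 # (batch, 1, frames), in [0, 1]
        return x * attn.unsqueeze(-1)                            # suppress low-motion frames


if __name__ == "__main__":
    feats = torch.randn(2, 64, 30, 25)   # e.g. 64 channels, 30 frames, 25 joints
    out = LocalPostureMotionTemporalAttention(64)(feats)
    print(out.shape)                      # torch.Size([2, 64, 30, 25])
```

Analogous channel attention (LPM-CAM) and spatiotemporal fusion (PM-STF) stages would be stacked on such a backbone; their exact formulations are given only in the full paper at the Online Access link above.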