3D Point Cloud from Millimeter-wave Radar for Human Action Recognition: Dataset and Method
Millimeter-wave radar is increasingly being adopted for smart home systems, elder care, and surveillance monitoring, owing to its adaptability to environmental conditions, high resolution, and privacy-preserving capabilities. A key factor in effectively utilizing millimeter-wave radar is the analysis of point clouds, which are essential for recognizing human postures.
Main Authors: | Biao JIN, Kangsheng SUN, Hao WU, Zixuan LI, Zhenkai ZHANG, Yan CAI, Rongmin LI, Xiangqun ZHANG, Genyuan DU |
---|---|
Format: | Article |
Language: | English |
Published: | China Science Publishing & Media Ltd. (CSPM), 2025-02-01 |
Series: | Leida xuebao |
Subjects: | human action recognition (HAR); millimeter-wave radar; 3D point cloud; deep learning; convolutional neural networks (CNN) |
Online Access: | https://radars.ac.cn/cn/article/doi/10.12000/JR24195 |
_version_ | 1832591785727098880 |
---|---|
author | Biao JIN; Kangsheng SUN; Hao WU; Zixuan LI; Zhenkai ZHANG; Yan CAI; Rongmin LI; Xiangqun ZHANG; Genyuan DU |
collection | DOAJ |
description | Millimeter-wave radar is increasingly being adopted for smart home systems, elder care, and surveillance monitoring, owing to its adaptability to environmental conditions, high resolution, and privacy-preserving capabilities. A key factor in effectively utilizing millimeter-wave radar is the analysis of point clouds, which are essential for recognizing human postures. However, the sparse nature of these point clouds poses significant challenges for accurate and efficient human action recognition. To overcome these issues, we present a 3D point cloud dataset tailored for human actions captured using millimeter-wave radar (mmWave-3DPCHM-1.0). This dataset is enhanced with advanced data processing techniques and cutting-edge human action recognition models. Data collection is conducted using Texas Instruments (TI)’s IWR1443-ISK and Vayyar’s vBlu radio imaging module, covering 12 common human actions, including walking, waving, standing, and falling. At the core of our approach is the Point EdgeConv and Transformer (PETer) network, which integrates edge convolution with transformer models. For each 3D point cloud frame, PETer constructs a locally directed neighborhood graph through edge convolution to extract spatial geometric features effectively. The network then leverages a series of Transformer encoding models to uncover temporal relationships across multiple point cloud frames. Extensive experiments reveal that the PETer network achieves exceptional recognition rates of 98.77% on the TI dataset and 99.51% on the Vayyar dataset, outperforming the traditional optimal baseline model by approximately 5%. With a compact model size of only 1.09 MB, PETer is well-suited for deployment on edge devices, providing an efficient solution for real-time human action recognition in resource-constrained environments. |
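The abstract describes PETer's spatial step: for each 3D point cloud frame, a locally directed k-nearest-neighbor graph is built and edge features are extracted via edge convolution. The sketch below illustrates only that EdgeConv-style feature construction in NumPy; the function names, shapes, and the choice of k are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbors of every point (self excluded)."""
    # points: (N, 3) array of radar point-cloud coordinates
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # (N, N) squared distances
    np.fill_diagonal(d2, np.inf)                   # a point is not its own neighbor
    return np.argsort(d2, axis=1)[:, :k]           # (N, k) neighbor graph

def edgeconv_features(points, k=4):
    """EdgeConv-style edge features: concat(x_i, x_j - x_i) per directed edge."""
    idx = knn_indices(points, k)                   # (N, k)
    neighbors = points[idx]                        # (N, k, 3)
    center = np.repeat(points[:, None, :], k, 1)   # (N, k, 3) each point repeated k times
    return np.concatenate([center, neighbors - center], axis=-1)  # (N, k, 6)

rng = np.random.default_rng(0)
frame = rng.normal(size=(32, 3))                   # one sparse radar frame, 32 points
feats = edgeconv_features(frame, k=4)
print(feats.shape)                                 # (32, 4, 6)
```

In the full network, these per-edge features would pass through a shared MLP with max-pooling per point, and the resulting per-frame descriptors would then be fed to the stacked Transformer encoders that model temporal relationships across frames.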
format | Article |
id | doaj-art-a2746036b8b7481da0e4802ac1c6d3ea |
institution | Kabale University |
issn | 2095-283X |
language | English |
publishDate | 2025-02-01 |
publisher | China Science Publishing & Media Ltd. (CSPM) |
record_format | Article |
series | Leida xuebao |
spelling | doaj-art-a2746036b8b7481da0e4802ac1c6d3ea (2025-01-22T06:12:25Z); eng; China Science Publishing & Media Ltd. (CSPM); Leida xuebao, ISSN 2095-283X; 2025-02-01; Vol. 14, No. 1, pp. 73-89; DOI 10.12000/JR24195 (R24195) |
affiliations | Biao JIN, Kangsheng SUN, Hao WU, Zixuan LI, Zhenkai ZHANG: Jiangsu University of Science and Technology, Ocean College, Zhenjiang 212003, China; Yan CAI, Rongmin LI: Suzhou Zadar Vision Technology Co., Ltd., Suzhou 215000, China; Xiangqun ZHANG, Genyuan DU: Xuchang University, College of Information Engineering, Xuchang 461000, China |
title | 3D Point Cloud from Millimeter-wave Radar for Human Action Recognition: Dataset and Method |
topic | human action recognition (har) millimeter-wave radar 3d point cloud deep learning convolutional neural networks (cnn) |
url | https://radars.ac.cn/cn/article/doi/10.12000/JR24195 |