DGU-HAO: A Dataset With Daily Life Objects for Comprehensive 3D Human Action Analysis

The availability of high-quality datasets is critical to 3D human action analysis research. This paper introduces DGU-HAO (Human Action analysis dataset with daily life Objects), a novel multi-modality 3D human action dataset that spans four data modalities with accompanying annotations: motion capture, RGB video, image, and 3D object modeling data. It features 63 action classes involving interactions with 60 common furniture items and electronic devices. Each action class comprises approximately 1,000 motion capture sequences (3D skeleton data) with corresponding RGB video and 3D object modeling data, for a total of 67,505 motion capture samples. The dataset offers comprehensive 3D structural information about the human body, RGB images and videos, and point cloud data for the 60 objects, collected from 126 subjects to ensure inclusivity and account for diverse body types. To validate our dataset, we leveraged MMNet, a 3D human action recognition model, achieving Top-1 accuracies of 91.51% and 92.29% using the skeleton joint and bone methods, respectively. Beyond human action recognition, our versatile dataset is valuable for a wide range of 3D human action analysis research.
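As a rough illustration of the dataset composition described above (four modalities, 63 action classes, 126 subjects), the Python sketch below shows one way such multi-modal samples might be indexed. The directory layout, file extensions, and the DGUHAOSample and index_samples names are assumptions made for illustration only; they are not specified in this record.

from dataclasses import dataclass
from pathlib import Path
from typing import List, Optional

# Hypothetical per-sample container for the four modalities named in the
# abstract (motion capture / 3D skeleton, RGB video, still image, 3D object
# model). Field names and file layout are assumptions, not the dataset's
# documented API.
@dataclass
class DGUHAOSample:
    action_class: int            # one of the 63 action classes
    subject_id: int              # one of the 126 participating subjects
    skeleton_path: Path          # motion-capture (3D skeleton) sequence
    rgb_video_path: Path         # corresponding RGB video clip
    image_path: Optional[Path]   # still RGB image, if provided
    object_model_path: Path      # point cloud / 3D model of the interacted object

def index_samples(root: Path) -> List[DGUHAOSample]:
    """Walk an assumed <root>/<action_class>/<subject_id>/ layout and pair
    modality files that share a stem. Purely illustrative; adjust to the
    dataset's actual directory structure once downloaded."""
    samples: List[DGUHAOSample] = []
    for skel in sorted(root.glob("*/*/*.skeleton")):
        # Assumes numeric directory names for class and subject.
        action_class = int(skel.parts[-3])
        subject_id = int(skel.parts[-2])
        samples.append(
            DGUHAOSample(
                action_class=action_class,
                subject_id=subject_id,
                skeleton_path=skel,
                rgb_video_path=skel.with_suffix(".mp4"),
                image_path=None,
                object_model_path=skel.with_suffix(".ply"),
            )
        )
    return samples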

Bibliographic Details
Main Authors: Jiho Park, Junghye Kim, Yujung Gil, Dongho Kim
Format: Article
Language: English
Published: IEEE, 2024-01-01
Series: IEEE Access
Subjects: 3D human action analysis; human action recognition; human activity understanding; motion capture; multi-modal dataset
Online Access: https://ieeexplore.ieee.org/document/10385044/
author Jiho Park
Junghye Kim
Yujung Gil
Dongho Kim
collection DOAJ
description The availability of high-quality datasets is critical to 3D human action analysis research. This paper introduces DGU-HAO (Human Action analysis dataset with daily life Objects), a novel multi-modality 3D human action dataset that spans four data modalities with accompanying annotations: motion capture, RGB video, image, and 3D object modeling data. It features 63 action classes involving interactions with 60 common furniture items and electronic devices. Each action class comprises approximately 1,000 motion capture sequences (3D skeleton data) with corresponding RGB video and 3D object modeling data, for a total of 67,505 motion capture samples. The dataset offers comprehensive 3D structural information about the human body, RGB images and videos, and point cloud data for the 60 objects, collected from 126 subjects to ensure inclusivity and account for diverse body types. To validate our dataset, we leveraged MMNet, a 3D human action recognition model, achieving Top-1 accuracies of 91.51% and 92.29% using the skeleton joint and bone methods, respectively. Beyond human action recognition, our versatile dataset is valuable for a wide range of 3D human action analysis research.
format Article
id doaj-art-a154e99be1ea42d99841c7b7ffaf3c78
institution Kabale University
issn 2169-3536
language English
publishDate 2024-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling
  Record ID: doaj-art-a154e99be1ea42d99841c7b7ffaf3c78 (record timestamp 2025-08-20T03:43:52Z)
  Publication: IEEE Access (IEEE), ISSN 2169-3536, 2024-01-01, vol. 12, pp. 8780-8790
  DOI: 10.1109/ACCESS.2024.3351888 (IEEE Xplore document 10385044)
  Title: DGU-HAO: A Dataset With Daily Life Objects for Comprehensive 3D Human Action Analysis
  Authors:
    Jiho Park (https://orcid.org/0000-0002-1048-3881), Department of Artificial Intelligence, Dongguk University, Seoul, South Korea
    Junghye Kim (https://orcid.org/0009-0005-3608-7616), Department of Information and Communication Engineering, Dongguk University, Seoul, South Korea
    Yujung Gil (https://orcid.org/0000-0002-6139-9831), Department of Computer Science and Engineering, Dongguk University, Seoul, South Korea
    Dongho Kim (https://orcid.org/0000-0003-3349-103X), Software Education Institute, Dongguk University, Seoul, South Korea
  Abstract: as given in the description field above
  Online access: https://ieeexplore.ieee.org/document/10385044/
  Keywords: 3D human action analysis; human action recognition; human activity understanding; motion capture; multi-modal dataset
title DGU-HAO: A Dataset With Daily Life Objects for Comprehensive 3D Human Action Analysis
topic 3D human action analysis
human action recognition
human activity understanding
motion capture
multi-modal dataset
url https://ieeexplore.ieee.org/document/10385044/