Comparing Cross-Subject Performance on Human Activities Recognition Using Learning Models
Human activity recognition (HAR) plays a vital role in fields like ambient assisted living and health monitoring, where cross-subject recognition is one of the main challenges, arising from the diversity among users. Although recent studies have achieved satisfactory results under non-cross-subject conditions, recognition performance degrades significantly under the cross-subject criterion. In this paper, we evaluate three traditional machine learning methods and five deep neural network architectures under the same metrics on three popular HAR datasets: mHealth, PAMAP2, and UCI DSADS. The experimental results show that traditional machine learning approaches are generally more robust to new-subject scenarios under strict leave-one-subject-out cross-validation. Further analysis indicates that hand-crafted features are one major reason for the better cross-subject performance of traditional machine learning, while deep learning is more prone to learning subject-dependent features during end-to-end training. A novel training strategy for decision-tree-based methods is also proposed, yielding an improved random forest model that achieves performance competitive with state-of-the-art cross-subject HAR solutions, at an average F1-score (accuracy) of 94.49% (95.09%), 91.64% (92.21%), and 92.70% (93.29%) on the three datasets.
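The evaluation protocol named in the abstract, strict leave-one-subject-out (LOSO) cross-validation over hand-crafted window features, can be illustrated with a short sketch. This is a minimal, hypothetical example, not the paper's implementation: the per-channel statistics chosen as features, the `handcrafted_features` and `loso_evaluate` helper names, and the forest size are all assumptions for illustration.

```python
# Minimal LOSO evaluation sketch (assumed setup, not the paper's exact pipeline).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import LeaveOneGroupOut

def handcrafted_features(window):
    """Per-channel statistics over one sliding window of shape (n_samples, n_channels)."""
    return np.concatenate([
        window.mean(axis=0),   # mean of each sensor channel
        window.std(axis=0),    # standard deviation
        window.min(axis=0),    # minimum
        window.max(axis=0),    # maximum
    ])

def loso_evaluate(windows, labels, subjects):
    """windows: (n_windows, n_samples, n_channels); labels, subjects: (n_windows,).
    Each fold trains on all subjects but one and tests on the held-out subject."""
    X = np.stack([handcrafted_features(w) for w in windows])
    f1s, accs = [], []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, labels, groups=subjects):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)  # assumed size
        clf.fit(X[train_idx], labels[train_idx])
        pred = clf.predict(X[test_idx])
        f1s.append(f1_score(labels[test_idx], pred, average="macro"))
        accs.append(accuracy_score(labels[test_idx], pred))
    return float(np.mean(f1s)), float(np.mean(accs))
```

Because every fold holds out all windows from one subject, the test subject is never seen during training, which is what makes the criterion cross-subject; the returned means correspond to per-subject averages like the F1-scores and accuracies quoted above.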
Main Authors: | Zhe Yang; Mengjie Qu; Yun Pan; Ruohong Huan |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2022-01-01 |
Series: | IEEE Access |
Subjects: | Cross-subject; deep learning; human activity recognition; leave one subject out; traditional machine learning |
Online Access: | https://ieeexplore.ieee.org/document/9878329/ |
field | value |
---|---|
author | Zhe Yang; Mengjie Qu; Yun Pan; Ruohong Huan |
collection | DOAJ |
description | Human activity recognition (HAR) plays a vital role in fields like ambient assisted living and health monitoring, where cross-subject recognition is one of the main challenges, arising from the diversity among users. Although recent studies have achieved satisfactory results under non-cross-subject conditions, recognition performance degrades significantly under the cross-subject criterion. In this paper, we evaluate three traditional machine learning methods and five deep neural network architectures under the same metrics on three popular HAR datasets: mHealth, PAMAP2, and UCI DSADS. The experimental results show that traditional machine learning approaches are generally more robust to new-subject scenarios under strict leave-one-subject-out cross-validation. Further analysis indicates that hand-crafted features are one major reason for the better cross-subject performance of traditional machine learning, while deep learning is more prone to learning subject-dependent features during end-to-end training. A novel training strategy for decision-tree-based methods is also proposed, yielding an improved random forest model that achieves performance competitive with state-of-the-art cross-subject HAR solutions, at an average F1-score (accuracy) of 94.49% (95.09%), 91.64% (92.21%), and 92.70% (93.29%) on the three datasets. |
format | Article |
id | doaj-art-3215f39db2184112b134e2f18c1f20e8 |
institution | Kabale University |
issn | 2169-3536 |
language | English |
publishDate | 2022-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
publication details | IEEE Access, vol. 10, pp. 95179–95196, 2022-01-01. DOI: 10.1109/ACCESS.2022.3204739; IEEE document 9878329. Record indexed 2025-01-16T00:01:11Z. |
author details | Zhe Yang (https://orcid.org/0000-0001-7246-0012), Mengjie Qu, and Yun Pan (https://orcid.org/0000-0002-9335-4291) are with the College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China. Ruohong Huan (https://orcid.org/0000-0003-2555-343X) is with the College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China. |
title | Comparing Cross-Subject Performance on Human Activities Recognition Using Learning Models |
topic | Cross-subject; deep learning; human activity recognition; leave one subject out; traditional machine learning |
url | https://ieeexplore.ieee.org/document/9878329/ |