Experimental Study of a Deep-Learning RGB-D Tracker for Virtual Remote Human Model Reconstruction

Bibliographic Details
Main Authors: Shahram Payandeh, Jeffrey Wael
Format: Article
Language: English
Published: Wiley 2021-01-01
Series: International Journal of Telemedicine and Applications
Online Access: http://dx.doi.org/10.1155/2021/5551753
Abstract: Tracking a person's body movements in a natural living environment is a challenging undertaking. Such tracking information can be used to detect the onset of anomalies in movement patterns or as part of a remote monitoring environment, and it can be mapped onto and visualized through a virtual avatar model of the tracked person. This paper presents an initial experimental study of a commercially available deep-learning body-tracking system based on an RGB-D sensor for virtual human model reconstruction. The study was carried out in an indoor environment under natural conditions. To assess the tracker's performance, we experimentally examine its output, a skeleton (stick-figure) data structure, under several conditions in order to observe its robustness and identify its drawbacks. In addition, we show how the generic skeleton model can be mapped for virtual human model reconstruction. We found that the deep-learning tracking approach using an RGB-D sensor is susceptible to various environmental factors, which lead both to missing joints and to noise in the estimated locations of skeleton joints; this in turn introduces challenges for virtual model reconstruction. We present an initial approach for compensating for such noise, yielding smoother temporal variation of the joint coordinates in the captured skeleton data, and we explore how the extracted joint positions can be used as part of the virtual human model reconstruction.

ISSN: 1687-6415, 1687-6423
Collection: DOAJ (record id doaj-art-2b36d202c7dc4288b73161ceea318f85)
Institution: Kabale University
Author Affiliation: Networked Robotics and Sensing Laboratory, School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, V5A 1S6, Canada (both authors)
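The abstract describes compensating for joint-position noise to obtain smoother temporal variation of the joint coordinates in the captured skeleton data. The paper's actual compensation method is not given in this record; as a minimal illustrative sketch of one such temporal filter, the snippet below applies a per-joint exponential moving average and holds the last valid estimate when the tracker drops a joint. The joint names, frame format, and `SkeletonSmoother` class are all assumptions for illustration, not part of any tracker SDK.

```python
class SkeletonSmoother:
    """Hypothetical sketch: smooths per-frame 3D skeleton joint positions
    with an exponential moving average (EMA), and holds the last valid
    estimate when the tracker loses a joint in a frame."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha   # weight of the newest observation (0 < alpha <= 1)
        self.state = {}      # joint name -> last smoothed (x, y, z) tuple

    def update(self, frame):
        """frame: dict mapping joint name -> (x, y, z), or None when the
        tracker lost that joint. Returns the smoothed frame."""
        out = {}
        for name, pos in frame.items():
            if pos is None:
                # Joint dropout: reuse the last smoothed estimate, if any.
                out[name] = self.state.get(name)
                continue
            prev = self.state.get(name)
            if prev is None:
                smoothed = tuple(float(c) for c in pos)
            else:
                # EMA: blend the new observation with the previous estimate.
                smoothed = tuple(self.alpha * p + (1.0 - self.alpha) * q
                                 for p, q in zip(pos, prev))
            self.state[name] = smoothed
            out[name] = smoothed
        return out
```

A larger `alpha` tracks fast motion more closely but passes through more sensor noise; a smaller `alpha` smooths harder at the cost of lag, so the value would need tuning against the tracker's actual noise characteristics.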