A Deep Learning Framework for Healthy Lifestyle Monitoring and Outdoor Localization

Bibliographic Details
Main Authors: Mehrab Rafiq, Naif S. Alshammari, Haifa F. Alhasson, Dina Abdulaziz Alhammadi, Mohammed Alshehri, Ahmad Jalal, Hui Liu
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/11015451/
Description
Summary: The emerging research field of ubiquitous computing has attracted and sustained academic interest for some time. Recognition and localization of human locomotion have been widely developed as ubiquitous computing applications, serving personal safety, behavior analysis, entertainment, and healthcare monitoring. Human locomotion recognition (HLR) is a key component of several fields, such as robotics, sports, healthcare, and security. Researchers and engineers have sought to exploit the growing popularity of wearable technology, especially environmental sensors and Inertial Measurement Units (IMUs), to identify and categorize human locomotion activities accurately and efficiently. Advances in sensing technology have expanded the capabilities of smartphones and wearable devices: inertial sensors such as gyroscopes and accelerometers are now routinely embedded in smartphones and, although originally intended to enhance device features, can now be used for a wide range of purposes. Using smartphone IMU, ambient, GPS, and audio sensor data from two publicly available benchmark datasets, the Extrasensory dataset and the Domino dataset, this study proposes a sophisticated approach for human locomotion and localization detection. In the preprocessing stage, the signals were filtered with a Chebyshev Type II filter, and a Hamming window was applied for windowing and segmentation. Feature extraction was divided into two parts: for actions, the extracted features included the Fast Fourier Transform (FFT), State Space Correlation Entropy (SSCE), the Maximum Lyapunov Exponent (MLE), and Auto Regression; for localization, Recursive Feature Elimination (RFE), step count, heading angle, and step length were employed. Kernel Fisher Discriminant Analysis was applied for feature optimization, and a deep neural network was used for classification. The proposed system achieved an overall classification accuracy of 88.4% on the Extrasensory dataset and 86.4% on the Domino dataset, outperforming several existing state-of-the-art methods. These results highlight the effectiveness of the preprocessing pipeline and feature optimization techniques in enhancing recognition and localization performance. The experimental evaluation confirms the robustness of the system across diverse activities and environments, making it suitable for real-world ubiquitous computing applications.
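
To make the described pipeline more concrete, below is a minimal Python sketch of the preprocessing and FFT-based feature extraction steps, assuming NumPy and SciPy are available. All parameter values (sampling rate, filter order, stop-band attenuation, cutoff frequency, window length, overlap, number of retained spectral bins) are illustrative placeholders and are not taken from the paper; the sketch covers only the Chebyshev Type II filtering, Hamming-windowed segmentation, and FFT feature portion of the approach, not the remaining features, KFDA optimization, or the deep neural network.

Example (Python):

import numpy as np
from scipy.signal import cheby2, sosfiltfilt

def preprocess(signal, fs=50.0, cutoff_hz=10.0, win_len=128, overlap=0.5):
    # Chebyshev Type II low-pass filter (order and attenuation are illustrative).
    sos = cheby2(N=4, rs=40, Wn=cutoff_hz, btype='low', fs=fs, output='sos')
    filtered = sosfiltfilt(sos, signal)

    # Hamming-windowed segmentation into fixed-length, overlapping frames.
    step = int(win_len * (1 - overlap))
    window = np.hamming(win_len)
    frames = [filtered[i:i + win_len] * window
              for i in range(0, len(filtered) - win_len + 1, step)]
    return np.array(frames)

def fft_features(frames, n_bins=5):
    # Magnitude spectrum per frame; keep the strongest bins as simple FFT features.
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    return np.sort(spectra, axis=1)[:, -n_bins:]

# Example with a synthetic single-axis accelerometer trace sampled at 50 Hz.
t = np.arange(0, 10, 1 / 50.0)
acc_x = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(t.size)
frames = preprocess(acc_x)
features = fft_features(frames)
print(frames.shape, features.shape)

On this synthetic 10-second trace, the sketch produces overlapping 128-sample Hamming-windowed frames and a small FFT-magnitude feature vector per frame; in the paper's pipeline such features would be combined with the other extracted features, optimized with Kernel Fisher Discriminant Analysis, and passed to the deep neural network classifier.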
ISSN:2169-3536