A Federated Learning-Based Framework for Accurately Identifying Human Activity in the Environment
| Main Authors: | , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/11052285/ |
| Summary: | Human Activity Recognition (HAR) refers to the detection of people’s activities during daily life using various types of sensors. Machine learning (ML) has contributed to recording many human activities and plays a meaningful role in HAR. Analysis of the data generated by HAR devices may involve deep learning models and algorithms of different kinds. These data are personal and may include sensitive information. However, many HAR applications are implemented using a centralized approach, which may put user information at risk. Federated learning (FL), a distributed machine learning approach, distributes machine learning models to edge devices. For this study, we developed a federated learning system to support HAR by constructing a model for each client individually, using user-based training data and without data sharing. The deep learning models used were Convolutional Neural Network (CNN), Residual Network (ResNet), and Long Short-Term Memory (LSTM); we built FL models and conducted experiments with each of the three across multiple client divisions, namely 2, 5, or 10 clients. The performance measures used to evaluate these FL models were the loss function and accuracy. Our study yielded promising results: ResNet, which to the best of our knowledge has not been used in previous studies in this context, achieved the best results with five clients, attaining 93.05% accuracy. |
| ISSN: | 2169-3536 |
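
The abstract describes the core mechanic of the study: each client trains a model on its own data, and only model parameters (never raw data) leave the device. The sketch below is not taken from the paper; it illustrates FedAvg-style aggregation with a toy linear model in NumPy. The `local_train` function, the learning rate, and the synthetic client data are hypothetical stand-ins for the CNN, ResNet, and LSTM training the study actually performs, and the five-client split mirrors one of the reported client divisions.

```python
# Minimal sketch of FedAvg-style federated learning, assuming a toy
# linear-regression model. local_train() and all parameters are
# hypothetical placeholders, not the paper's implementation.
import numpy as np

def local_train(weights, client_data, lr=0.1, epochs=1):
    """Hypothetical local update: SGD on a linear model, using only
    this client's private (X, y) data."""
    w = weights.copy()
    X, y = client_data
    for _ in range(epochs):
        preds = X @ w
        grad = X.T @ (preds - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def fedavg(global_weights, clients, rounds=10):
    """Each round: broadcast weights, train locally on every client,
    then average the returned weights into a new global model."""
    w = global_weights
    for _ in range(rounds):
        client_weights = [local_train(w, data) for data in clients]
        # Unweighted average for simplicity; FedAvg proper weights each
        # client by its sample count.
        w = np.mean(client_weights, axis=0)
    return w

# Toy usage: 5 clients, each holding private data that never leaves them.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = fedavg(np.zeros(2), clients, rounds=50)
print(w)  # approaches true_w without pooling any client's raw data
```

The design choice this illustrates is the one the abstract emphasizes: the server only ever sees weight vectors, so the sensitive sensor readings used for HAR stay on the edge devices.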