A comparative analysis of LSTM models aided with attention and squeeze and excitation blocks for activity recognition
Abstract Human Activity Recognition plays a vital role in various fields, such as healthcare and smart environments. Traditional HAR methods rely on sensor or video data, but sensor-based systems have gained popularity due to their non-intrusive nature. Current challenges in HAR systems include vari...
Main Authors: Murad Khan, Yousef Hossni
Format: Article
Language: English
Published: Nature Portfolio, 2025-01-01
Series: Scientific Reports
Subjects: Activity recognition; Deep learning; LSTM; Squeeze and excitation; Attention; Multi-head
Online Access: https://doi.org/10.1038/s41598-025-88378-6
_version_ | 1832571759162818560 |
author | Murad Khan; Yousef Hossni |
author_facet | Murad Khan; Yousef Hossni |
author_sort | Murad Khan |
collection | DOAJ |
description | Human Activity Recognition (HAR) plays a vital role in various fields, such as healthcare and smart environments. Traditional HAR methods rely on sensor or video data, but sensor-based systems have gained popularity due to their non-intrusive nature. Current challenges in HAR systems include variability in sensor data influenced by factors like sensor placement, user differences, and environmental conditions. Additionally, imbalanced datasets and computational complexity hinder the performance of these systems in real-world applications. To address these challenges, this paper proposes an LSTM-based HAR model enhanced with attention and squeeze-and-excitation blocks. The LSTM captures temporal dependencies, while the attention mechanism dynamically focuses on important parts of the input sequence. The squeeze-and-excitation block recalibrates channel-wise feature importance, allowing the model to emphasize the most informative features. The proposed model demonstrated a 99% accuracy rate, showcasing its effectiveness in recognizing various activities from sensor data. The integration of attention and squeeze-and-excitation mechanisms further boosted the model’s ability to handle complex datasets. Comparative analysis with existing LSTM models confirms that the proposed approach improves accuracy and reduces computational complexity, making it a highly suitable model for real-world applications. |
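The description outlines the architecture only at a high level: an LSTM for temporal dependencies, attention over the input sequence, and a squeeze-and-excitation (SE) block for channel-wise recalibration. The sketch below is a minimal Keras illustration of that kind of pipeline, not the authors' reported configuration; the window length (128 steps), sensor channel count (9), class count (6), and all layer sizes are assumptions made for the example.

```python
# Illustrative sketch only: an LSTM combined with multi-head attention and a
# squeeze-and-excitation (SE) block for sensor-based activity recognition.
# Shapes and layer sizes are hypothetical, not taken from the paper.
from tensorflow.keras import layers, models

def se_block(x, reduction=4):
    """Squeeze-and-excitation: reweight feature channels by learned importance."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling1D()(x)                 # squeeze over time
    s = layers.Dense(channels // reduction, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)    # per-channel weights
    s = layers.Reshape((1, channels))(s)
    return layers.Multiply()([x, s])                       # excite: rescale channels

def build_har_model(timesteps=128, features=9, num_classes=6):
    inputs = layers.Input(shape=(timesteps, features))
    x = layers.LSTM(64, return_sequences=True)(inputs)     # temporal dependencies
    # Multi-head self-attention over the LSTM output sequence
    attn = layers.MultiHeadAttention(num_heads=4, key_dim=16)(x, x)
    x = layers.Add()([x, attn])
    x = layers.LayerNormalization()(x)
    x = se_block(x)                                         # emphasize informative channels
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_har_model()
model.summary()
```

The residual connection and layer normalization around the attention block follow common practice for stabilizing attention over recurrent features; the paper may order or size these components differently.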
format | Article |
id | doaj-art-e0294f9e06fc4954a9a79aa5aada8712 |
institution | Kabale University |
issn | 2045-2322 |
language | English |
publishDate | 2025-01-01 |
publisher | Nature Portfolio |
record_format | Article |
series | Scientific Reports |
spelling | doaj-art-e0294f9e06fc4954a9a79aa5aada8712 | 2025-02-02T12:22:57Z | eng | Nature Portfolio | Scientific Reports | 2045-2322 | 2025-01-01 | 15 1 1 20 | 10.1038/s41598-025-88378-6 | A comparative analysis of LSTM models aided with attention and squeeze and excitation blocks for activity recognition | Murad Khan (Kuwait College of Science and Technology); Yousef Hossni (Kuwait College of Science and Technology) | https://doi.org/10.1038/s41598-025-88378-6 | Activity recognition; Deep learning; LSTM; Squeeze and excitation; Attention; Multi-head |
spellingShingle | Murad Khan; Yousef Hossni | A comparative analysis of LSTM models aided with attention and squeeze and excitation blocks for activity recognition | Scientific Reports | Activity recognition; Deep learning; LSTM; Squeeze and excitation; Attention; Multi-head |
title | A comparative analysis of LSTM models aided with attention and squeeze and excitation blocks for activity recognition |
title_full | A comparative analysis of LSTM models aided with attention and squeeze and excitation blocks for activity recognition |
title_fullStr | A comparative analysis of LSTM models aided with attention and squeeze and excitation blocks for activity recognition |
title_full_unstemmed | A comparative analysis of LSTM models aided with attention and squeeze and excitation blocks for activity recognition |
title_short | A comparative analysis of LSTM models aided with attention and squeeze and excitation blocks for activity recognition |
title_sort | comparative analysis of lstm models aided with attention and squeeze and excitation blocks for activity recognition |
topic | Activity recognition; Deep learning; LSTM; Squeeze and excitation; Attention; Multi-head |
url | https://doi.org/10.1038/s41598-025-88378-6 |
work_keys_str_mv | AT muradkhan acomparativeanalysisoflstmmodelsaidedwithattentionandsqueezeandexcitationblocksforactivityrecognition AT yousefhossni acomparativeanalysisoflstmmodelsaidedwithattentionandsqueezeandexcitationblocksforactivityrecognition AT muradkhan comparativeanalysisoflstmmodelsaidedwithattentionandsqueezeandexcitationblocksforactivityrecognition AT yousefhossni comparativeanalysisoflstmmodelsaidedwithattentionandsqueezeandexcitationblocksforactivityrecognition |