Applying MLP-Mixer and gMLP to Human Activity Recognition
The development of deep learning has led to the proposal of various models for human activity recognition (HAR). Convolutional neural networks (CNNs), initially proposed for computer vision tasks, are examples of models applied to sensor data. Recently, high-performing models based on Transformers and multi-layer perceptrons (MLPs) have also been proposed. When applying these methods to sensor data, we often initialize hyperparameters with values optimized for image processing tasks as a starting point. We suggest that comparable accuracy could be achieved with fewer parameters for sensor data, which typically have lower dimensionality than image data. Reducing the number of parameters would decrease memory requirements and computational complexity by reducing the model size. We evaluated the performance of two MLP-based models, MLP-Mixer and gMLP, by reducing the values of hyperparameters in their MLP layers from those proposed in the respective original papers. The results of this study suggest that the performance of MLP-based models is positively correlated with the number of parameters. Furthermore, these MLP-based models demonstrate improved computational efficiency for specific HAR tasks compared to representative CNNs.

(An illustrative sketch of the MLP-layer hyperparameter reduction described in this abstract appears after the record fields at the end of this page.)
Main Authors: | Takeru Miyoshi, Makoto Koshino, Hidetaka Nambo |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2025-01-01 |
Series: | Sensors |
Subjects: | human activity recognition; multi-layer perceptrons (MLPs); MLP-mixer; gMLP; smartphone; inertial measurement unit (IMU) |
Online Access: | https://www.mdpi.com/1424-8220/25/2/311 |
author | Takeru Miyoshi; Makoto Koshino; Hidetaka Nambo |
collection | DOAJ |
description | The development of deep learning has led to the proposal of various models for human activity recognition (HAR). Convolutional neural networks (CNNs), initially proposed for computer vision tasks, are examples of models applied to sensor data. Recently, high-performing models based on Transformers and multi-layer perceptrons (MLPs) have also been proposed. When applying these methods to sensor data, we often initialize hyperparameters with values optimized for image processing tasks as a starting point. We suggest that comparable accuracy could be achieved with fewer parameters for sensor data, which typically have lower dimensionality than image data. Reducing the number of parameters would decrease memory requirements and computational complexity by reducing the model size. We evaluated the performance of two MLP-based models, MLP-Mixer and gMLP, by reducing the values of hyperparameters in their MLP layers from those proposed in the respective original papers. The results of this study suggest that the performance of MLP-based models is positively correlated with the number of parameters. Furthermore, these MLP-based models demonstrate improved computational efficiency for specific HAR tasks compared to representative CNNs. |
format | Article |
id | doaj-art-2ed1f4f5eddf43c2a3d41687833fae38 |
institution | Kabale University |
issn | 1424-8220 |
language | English |
publishDate | 2025-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
doi | 10.3390/s25020311 |
published_in | Sensors, Vol. 25, Issue 2, Article 311 (2025-01-01) |
affiliations | Takeru Miyoshi: Graduate School of Natural Science and Technology, Kanazawa University, Kanazawa 920-1192, Japan; Makoto Koshino: National Institute of Technology, Ishikawa College, Tsubata 929-0392, Japan; Hidetaka Nambo: Graduate School of Natural Science and Technology, Kanazawa University, Kanazawa 920-1192, Japan |
title | Applying MLP-Mixer and gMLP to Human Activity Recognition |
topic | human activity recognition; multi-layer perceptrons (MLPs); MLP-mixer; gMLP; smartphone; inertial measurement unit (IMU) |
url | https://www.mdpi.com/1424-8220/25/2/311 |
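The abstract describes evaluating MLP-Mixer and gMLP after shrinking the hyperparameters of their MLP layers from the values proposed for image data. As a rough, non-authoritative illustration (this is not the authors' code; the patching scheme, names, and layer sizes below are assumptions), the following PyTorch sketch shows a single MLP-Mixer-style block applied to windowed inertial-sensor data, with the token-mixing and channel-mixing hidden widths exposed as the hyperparameters that would be reduced. gMLP would expose analogous widths in its gating units.

```python
# Minimal sketch, assuming windowed IMU input of shape (batch, tokens, channels).
# Hyperparameter values here are illustrative only; the point is that
# token_hidden and channel_hidden can be set far below image-oriented defaults
# (e.g. the 256/2048 token-/channel-mixing widths of the smallest Mixer in the
# original MLP-Mixer paper).
import torch
import torch.nn as nn


class MixerBlock(nn.Module):
    def __init__(self, num_tokens, channels, token_hidden, channel_hidden):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        # Token-mixing MLP: mixes information across time steps (tokens).
        self.token_mlp = nn.Sequential(
            nn.Linear(num_tokens, token_hidden),
            nn.GELU(),
            nn.Linear(token_hidden, num_tokens),
        )
        self.norm2 = nn.LayerNorm(channels)
        # Channel-mixing MLP: mixes information across sensor/embedding channels.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channel_hidden),
            nn.GELU(),
            nn.Linear(channel_hidden, channels),
        )

    def forward(self, x):                          # x: (batch, tokens, channels)
        y = self.norm1(x).transpose(1, 2)          # (batch, channels, tokens)
        x = x + self.token_mlp(y).transpose(1, 2)  # residual token mixing
        x = x + self.channel_mlp(self.norm2(x))    # residual channel mixing
        return x


# Hypothetical configuration for a window of accelerometer/gyroscope data split
# into 32 patches with a 64-dimensional embedding. Reducing token_hidden and
# channel_hidden is the kind of reduction the abstract refers to.
block = MixerBlock(num_tokens=32, channels=64, token_hidden=64, channel_hidden=128)
out = block(torch.randn(8, 32, 64))
print(out.shape)  # torch.Size([8, 32, 64])
```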