Pain Level Classification Using Eye-Tracking Metrics and Machine Learning Models
Pain estimation is a critical aspect of healthcare, particularly for patients who are unable to communicate discomfort effectively. Traditional methods, such as self-reporting or observational scales, are subjective and prone to bias. This study proposes a novel system for non-invasive pain estimation using eye-tracking technology and advanced machine learning models. The methodology begins with preprocessing steps, including resizing, normalization, and data augmentation, to prepare high-quality input face images. DeepLabV3+ is employed for the precise segmentation of the eye and face regions, achieving 95% accuracy. Feature extraction is performed using VGG16, capturing key metrics such as pupil size, blink rate, and saccade velocity. Multiple machine learning models, including Random Forest, SVM, MLP, XGBoost, and NGBoost, are trained on the extracted features. XGBoost achieves the highest classification accuracy of 99.5%, demonstrating its robustness for pain level classification on a scale from 0 to 5. Feature analysis using SHAP values reveals that pupil size and blink rate contribute most to the predictions, with SHAP contribution scores of 0.42 and 0.35, respectively. The loss curves for DeepLabV3+ confirm rapid convergence during training, ensuring reliable segmentation. This work highlights the transformative potential of combining eye-tracking data with machine learning for non-invasive pain estimation, with significant applications in healthcare, human–computer interaction, and assistive technologies.
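The classification stage the abstract describes (eye-metric features mapped to a pain level from 0 to 5) can be sketched with a minimal stand-in model. The feature names below follow the abstract, but the training values, the nearest-centroid classifier, and the function names are illustrative assumptions, not the authors' XGBoost implementation:

```python
import math

# Illustrative eye-tracking feature vectors:
# (pupil_size_mm, blink_rate_hz, saccade_velocity_deg_s).
# Feature names follow the abstract; all values and labels are invented.
TRAIN = {
    0: [(3.0, 0.20, 300.0), (3.1, 0.22, 310.0)],   # no pain
    3: [(4.2, 0.45, 220.0), (4.3, 0.48, 215.0)],   # moderate pain
    5: [(5.5, 0.80, 140.0), (5.6, 0.85, 135.0)],   # severe pain
}

def centroid(rows):
    """Mean feature vector of one pain-level class."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(3))

CENTROIDS = {label: centroid(rows) for label, rows in TRAIN.items()}

def predict_pain_level(features):
    """Nearest-centroid stand-in for the multi-class pain classifier."""
    return min(CENTROIDS, key=lambda label: math.dist(features, CENTROIDS[label]))

print(predict_pain_level((5.4, 0.78, 150.0)))  # → 5 (closest to the severe-pain centroid)
```

In the paper's actual pipeline the features come from VGG16 over DeepLabV3+-segmented eye regions and the classifier is gradient-boosted trees; the toy model above only shows the shape of the feature-to-label mapping.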
| Main Authors: | Oussama El Othmani, Sami Naouali |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-05-01 |
| Series: | Computers |
| Subjects: | eye-tracking; pain estimation; machine learning; DeepLabV3+; feature extraction; XGBoost |
| Online Access: | https://www.mdpi.com/2073-431X/14/6/212 |
| Field | Value |
|---|---|
| author | Oussama El Othmani; Sami Naouali |
| collection | DOAJ |
| description | Pain estimation is a critical aspect of healthcare, particularly for patients who are unable to communicate discomfort effectively. Traditional methods, such as self-reporting or observational scales, are subjective and prone to bias. This study proposes a novel system for non-invasive pain estimation using eye-tracking technology and advanced machine learning models. The methodology begins with preprocessing steps, including resizing, normalization, and data augmentation, to prepare high-quality input face images. DeepLabV3+ is employed for the precise segmentation of the eye and face regions, achieving 95% accuracy. Feature extraction is performed using VGG16, capturing key metrics such as pupil size, blink rate, and saccade velocity. Multiple machine learning models, including Random Forest, SVM, MLP, XGBoost, and NGBoost, are trained on the extracted features. XGBoost achieves the highest classification accuracy of 99.5%, demonstrating its robustness for pain level classification on a scale from 0 to 5. Feature analysis using SHAP values reveals that pupil size and blink rate contribute most to the predictions, with SHAP contribution scores of 0.42 and 0.35, respectively. The loss curves for DeepLabV3+ confirm rapid convergence during training, ensuring reliable segmentation. This work highlights the transformative potential of combining eye-tracking data with machine learning for non-invasive pain estimation, with significant applications in healthcare, human–computer interaction, and assistive technologies. |
| format | Article |
| id | doaj-art-1fdfe579958d4a41871690d24a3803fd |
| institution | Kabale University |
| issn | 2073-431X |
| language | English |
| publishDate | 2025-05-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | Computers |
| doi | 10.3390/computers14060212 |
| affiliations | Oussama El Othmani: Information Systems Department, Military Academy of Fondouk Jedid, Nabeul 8012, Tunisia; Sami Naouali: Information Systems Department, College of Computer Science and Information Technology, King Faisal University, Al Ahsa 31982, Saudi Arabia |
| title | Pain Level Classification Using Eye-Tracking Metrics and Machine Learning Models |
| topic | eye-tracking; pain estimation; machine learning; DeepLabV3+; feature extraction; XGBoost |
| url | https://www.mdpi.com/2073-431X/14/6/212 |
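The abstract reports SHAP contribution scores of 0.42 for pupil size and 0.35 for blink rate. Such per-feature scores are typically the mean absolute attribution across samples; the sketch below illustrates that aggregation step with invented per-sample values (the numbers, feature keys, and function name are assumptions, not the paper's data):

```python
# Toy per-sample attribution values (e.g., SHAP values) for each eye-tracking feature.
# All numbers are invented for illustration; only the aggregation mirrors the
# common mean-|SHAP| feature-importance summary.
attributions = {
    "pupil_size":       [0.50, -0.40, 0.30],
    "blink_rate":       [0.30, 0.35, -0.25],
    "saccade_velocity": [0.10, -0.05, 0.15],
}

def mean_abs_importance(attr):
    """Collapse per-sample attributions into one importance score per feature."""
    return {f: sum(abs(v) for v in vals) / len(vals) for f, vals in attr.items()}

shares = mean_abs_importance(attributions)
for feature, score in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {score:.2f}")
# pupil_size: 0.40
# blink_rate: 0.30
# saccade_velocity: 0.10
```

Note the sign of an individual attribution indicates direction (toward higher or lower predicted pain); the magnitude is what the summary score aggregates.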