Real-Time American Sign Language Interpretation Using Deep Learning and Keypoint Tracking

Bibliographic Details
Main Authors: Bader Alsharif, Easa Alalwany, Ali Ibrahim, Imad Mahgoub, Mohammad Ilyas
Format: Article
Language: English
Published: MDPI AG, 2025-03-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/25/7/2138
Description
Summary: Communication barriers pose significant challenges for the Deaf and Hard-of-Hearing (DHH) community, limiting their access to essential services, social interactions, and professional opportunities. To bridge this gap, assistive technologies leveraging artificial intelligence (AI) and deep learning have gained prominence. This study presents a real-time American Sign Language (ASL) interpretation system that integrates deep learning with keypoint tracking to enhance accessibility and foster inclusivity. By combining the YOLOv11 model for gesture recognition with MediaPipe for precise hand tracking, the system achieves high accuracy in identifying ASL alphabet letters in real time. The proposed approach addresses challenges such as gesture ambiguity, environmental variations, and computational efficiency. Additionally, this system enables users to spell out names and locations, further improving its practical applications. Experimental results demonstrate that the model attains a mean Average Precision (mAP@0.5) of 98.2%, with an inference speed optimized for real-world deployment. This research underscores the critical role of AI-driven assistive technologies in empowering the DHH community by enabling seamless communication and interaction.
ISSN:1424-8220
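
A minimal sketch of the pipeline the abstract describes, pairing MediaPipe hand keypoint tracking with a YOLO letter detector on a webcam feed. This is not the authors' released implementation: the weights file asl_yolov11.pt, the single-hand setting, and the 0.5 detection-confidence threshold are illustrative assumptions; the mediapipe, ultralytics, and OpenCV calls are standard library APIs.

import cv2
import mediapipe as mp
from ultralytics import YOLO

# Hypothetical ASL-alphabet weights; not the paper's published model.
model = YOLO("asl_yolov11.pt")
hands = mp.solutions.hands.Hands(max_num_hands=1,
                                 min_detection_confidence=0.5)
drawer = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # MediaPipe expects RGB; OpenCV captures BGR.
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

    if result.multi_hand_landmarks:
        # Draw the 21 tracked hand keypoints for visual feedback.
        for lm in result.multi_hand_landmarks:
            drawer.draw_landmarks(frame, lm,
                                  mp.solutions.hands.HAND_CONNECTIONS)

        # Run the letter detector only on frames where a hand is present.
        for box in model(frame, verbose=False)[0].boxes:
            letter = model.names[int(box.cls)]
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.putText(frame, letter, (x1, y1 - 8),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)

    cv2.imshow("ASL interpreter (sketch)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()

Gating the YOLO pass on MediaPipe's hand presence is one plausible way to address the computational-efficiency concern the abstract raises, since frames with no tracked hand skip the heavier detection step entirely.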