CNN-Based Object Recognition and Tracking System to Assist Visually Impaired People
Visually impaired persons (VIPs) comprise a significant portion of the population around the globe. In recent times, technology has proved its presence in every domain, and innovative devices assist humans in their daily lives. In this work, a smart a...
Main Authors: | Fahad Ashiq, Muhammad Asif, Maaz Bin Ahmad, Sadia Zafar, Khalid Masood, Toqeer Mahmood, Muhammad Tariq Mahmood, Ik Hyun Lee |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2022-01-01 |
Series: | IEEE Access |
Subjects: | Object detection; security surveillance; visually impaired persons; CNN; deep learning |
Online Access: | https://ieeexplore.ieee.org/document/9698080/ |
_version_ | 1832582283040653312 |
---|---|
author | Fahad Ashiq Muhammad Asif Maaz Bin Ahmad Sadia Zafar Khalid Masood Toqeer Mahmood Muhammad Tariq Mahmood Ik Hyun Lee |
author_facet | Fahad Ashiq Muhammad Asif Maaz Bin Ahmad Sadia Zafar Khalid Masood Toqeer Mahmood Muhammad Tariq Mahmood Ik Hyun Lee |
author_sort | Fahad Ashiq |
collection | DOAJ |
description | Visually impaired persons (VIPs) comprise a significant portion of the population around the globe. In recent times, technology has proved its presence in every domain, and innovative devices assist humans in their daily lives. In this work, a smart and intelligent system is designed to assist the mobility of VIPs and ensure their safety. The proposed system provides real-time navigation through an automated voice. Although VIPs cannot see objects in their surroundings, the system enables them to sense and visualize the environment they move through. Moreover, a web-based application is developed to ensure their safety. The user of this application can turn on an on-demand function for sharing his/her location with family without compromising privacy. Through this application, family members of VIPs can track their movement (get location and snapshots) from their homes. Hence, the device allows VIPs to visualize the environment and ensures their security. Such a comprehensive device was a missing link in the existing literature. The application uses the MobileNet architecture because its low computational complexity suits low-power end devices. To assess the efficacy of the proposed system, six pilot studies were performed, which reflected satisfactory results. For object detection and recognition, a deep Convolutional Neural Network (CNN) model is employed, achieving an accuracy of 83.3% on a dataset containing more than 1000 categories. Moreover, a score-based quantitative comparative analysis is performed using the supported features of the devices. The proposed system outperforms existing devices with a total score of 9.1/10, which is 8% higher than the second-best. |
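The abstract attributes the system's suitability for low-power end devices to the MobileNet architecture, whose core building block is the depthwise-separable convolution: a per-channel depthwise filter followed by a 1×1 pointwise mix across channels. As a rough illustration only (this is not the authors' code, and all names here are hypothetical), one such block can be sketched in NumPy:

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_kernels):
    """One MobileNet-style block (illustrative sketch, stride 1, 'same' padding).

    x          : input feature map, shape (H, W, C_in)
    dw_kernels : depthwise filters, shape (k, k, C_in), one filter per channel
    pw_kernels : pointwise 1x1 weights, shape (C_in, C_out)
    returns    : output feature map, shape (H, W, C_out)
    """
    H, W, C_in = x.shape
    k = dw_kernels.shape[0]          # spatial kernel size, e.g. 3
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))

    # Depthwise step: each channel is filtered independently.
    dw = np.zeros_like(x)
    for c in range(C_in):
        for i in range(H):
            for j in range(W):
                dw[i, j, c] = np.sum(xp[i:i + k, j:j + k, c] * dw_kernels[:, :, c])

    # Pointwise step: a 1x1 convolution mixing channels (C_in -> C_out).
    return dw @ pw_kernels
```

The efficiency argument follows from counting multiplications per output pixel: a standard convolution costs k²·C_in·C_out, whereas the separable form costs k²·C_in + C_in·C_out, which is roughly a k²-fold saving for typical channel counts.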
format | Article |
id | doaj-art-08ec6cc44c024cba960e2901ea523a19 |
institution | Kabale University |
issn | 2169-3536 |
language | English |
publishDate | 2022-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj-art-08ec6cc44c024cba960e2901ea523a192025-01-30T00:01:07ZengIEEEIEEE Access2169-35362022-01-0110148191483410.1109/ACCESS.2022.31480369698080CNN-Based Object Recognition and Tracking System to Assist Visually Impaired PeopleFahad Ashiq0https://orcid.org/0000-0002-4054-0365Muhammad Asif1https://orcid.org/0000-0001-6811-0044Maaz Bin Ahmad2Sadia Zafar3Khalid Masood4Toqeer Mahmood5https://orcid.org/0000-0003-3125-2430Muhammad Tariq Mahmood6https://orcid.org/0000-0001-6814-3137Ik Hyun Lee7https://orcid.org/0000-0002-0605-7572Department of Computer Science, Lahore Garrison University, Lahore, PakistanDepartment of Computer Science, Lahore Garrison University, Lahore, PakistanCollege of Computing and Information Sciences, Karachi Institute of Economics and Technology, Karachi, PakistanDepartment of Computer Science, Lahore Garrison University, Lahore, PakistanDepartment of Computer Science, Lahore Garrison University, Lahore, PakistanDepartment of Computer Science, National Textile University, Faisalabad, PakistanFuture Convergence Engineering, Korea University of Technology and Education, Byeongcheonmyeon, Cheonan, Republic of KoreaDepartment of Mechatronics Engineering, Korea Polytechnic University, Siheung, Gyeonggido, Republic of KoreaVisually impaired persons (VIPs) comprise a significant portion of the population, and they are present around the globe and in every part of the world. In recent times, technology proved its presence in every domain, and innovative devices assist humans in their daily lives. In this work, a smart and intelligent system is designed for VIPs to assist mobility and ensure their safety. The proposed system provides navigation in real-time using an automated voice. Though VIPs wouldn’t be able to see objects in their surroundings, they can sense and visualize the roaming environment. Moreover, a web-based application is developed to ensure their safety. 
The user of this application can turn on an on-demand function for sharing his/her location with family without compromising privacy. Through this application, family members of VIPs can track their movement (get location and snapshots) from their homes. Hence, the device allows VIPs to visualize the environment and ensures their security. Such a comprehensive device was a missing link in the existing literature. The application uses the MobileNet architecture because its low computational complexity suits low-power end devices. To assess the efficacy of the proposed system, six pilot studies were performed, which reflected satisfactory results. For object detection and recognition, a deep Convolutional Neural Network (CNN) model is employed, achieving an accuracy of 83.3% on a dataset containing more than 1000 categories. Moreover, a score-based quantitative comparative analysis is performed using the supported features of the devices. The proposed system outperforms existing devices with a total score of 9.1/10, which is 8% higher than the second-best. https://ieeexplore.ieee.org/document/9698080/ Object detection; security surveillance; visually impaired persons; CNN; deep learning |
spellingShingle | Fahad Ashiq Muhammad Asif Maaz Bin Ahmad Sadia Zafar Khalid Masood Toqeer Mahmood Muhammad Tariq Mahmood Ik Hyun Lee CNN-Based Object Recognition and Tracking System to Assist Visually Impaired People IEEE Access Object detection security surveillance visually impaired persons CNN deep learning |
title | CNN-Based Object Recognition and Tracking System to Assist Visually Impaired People |
title_full | CNN-Based Object Recognition and Tracking System to Assist Visually Impaired People |
title_fullStr | CNN-Based Object Recognition and Tracking System to Assist Visually Impaired People |
title_full_unstemmed | CNN-Based Object Recognition and Tracking System to Assist Visually Impaired People |
title_short | CNN-Based Object Recognition and Tracking System to Assist Visually Impaired People |
title_sort | cnn based object recognition and tracking system to assist visually impaired people |
topic | Object detection security surveillance visually impaired persons CNN deep learning |
url | https://ieeexplore.ieee.org/document/9698080/ |
work_keys_str_mv | AT fahadashiq cnnbasedobjectrecognitionandtrackingsystemtoassistvisuallyimpairedpeople AT muhammadasif cnnbasedobjectrecognitionandtrackingsystemtoassistvisuallyimpairedpeople AT maazbinahmad cnnbasedobjectrecognitionandtrackingsystemtoassistvisuallyimpairedpeople AT sadiazafar cnnbasedobjectrecognitionandtrackingsystemtoassistvisuallyimpairedpeople AT khalidmasood cnnbasedobjectrecognitionandtrackingsystemtoassistvisuallyimpairedpeople AT toqeermahmood cnnbasedobjectrecognitionandtrackingsystemtoassistvisuallyimpairedpeople AT muhammadtariqmahmood cnnbasedobjectrecognitionandtrackingsystemtoassistvisuallyimpairedpeople AT ikhyunlee cnnbasedobjectrecognitionandtrackingsystemtoassistvisuallyimpairedpeople |