Lip print‐based identification using traditional and deep learning
Abstract: The concept of biometric identification is centred around the theory that every individual is unique and has distinct characteristics. Various metrics such as fingerprint, face, iris, or retina are adopted for this purpose. Nonetheless, new alternatives are needed to establish the identity of individuals on occasions where the above techniques are unavailable. One emerging method of human recognition is lip‐based identification, which can be treated as a new kind of biometric measure. The patterns found on the human lip are permanent unless subjected to alterations or trauma, so lip prints can serve to confirm an individual's identity. The main objective of this work is to design experiments using computer vision methods that can recognise an individual solely from their lip prints. This article compares traditional and deep learning computer vision methods and how they perform on a common dataset for lip‐based identification. The first pipeline is a traditional one that extracts Speeded‐Up Robust Features (SURF) and classifies them with either an SVM or a K‐NN classifier, achieving accuracies of 95.45% and 94.31%, respectively. A second pipeline compares the VGG16 and VGG19 deep learning architectures, which obtained accuracies of 91.53% and 93.22%, respectively.
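For readers who want to prototype the two approaches the abstract describes, the sketches below show one plausible way to set them up. They are minimal illustrations, not the authors' code: the dataset layout (`data/<subject_id>/*.png`), the bag-of-visual-words encoding used to turn variable-length SURF descriptors into fixed-length vectors, and all hyperparameters (codebook size, `hessianThreshold`, `n_neighbors`) are assumptions. SURF itself requires an opencv-contrib build with the non-free modules enabled; a free detector such as ORB can be swapped in with minor changes.

```python
# Sketch of the "traditional" pipeline: SURF descriptors -> bag-of-visual-words -> SVM / K-NN.
# Hypothetical dataset layout: data/<subject_id>/<image>.png (one folder per individual).
import glob
import os

import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs opencv-contrib with non-free modules

paths, labels = [], []
for subject_dir in sorted(glob.glob("data/*")):
    for img_path in glob.glob(os.path.join(subject_dir, "*.png")):
        paths.append(img_path)
        labels.append(os.path.basename(subject_dir))

def surf_descriptors(path):
    """Grayscale-load an image and return its SURF descriptors (64-D rows)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = surf.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 64), dtype=np.float32)

# Bag-of-visual-words: cluster all descriptors into a codebook, then represent each
# image as a normalised histogram of cluster assignments (a fixed-length vector).
codebook = KMeans(n_clusters=100, random_state=0).fit(
    np.vstack([surf_descriptors(p) for p in paths]))

def bovw_histogram(path):
    desc = surf_descriptors(path)
    hist = np.zeros(codebook.n_clusters)
    if len(desc):
        np.add.at(hist, codebook.predict(desc), 1)
        hist /= hist.sum()
    return hist

X = np.array([bovw_histogram(p) for p in paths])
y = np.array(labels)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")), ("K-NN", KNeighborsClassifier(n_neighbors=3))]:
    clf.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

The deep-learning pipeline can be approximated with off-the-shelf VGG models. Again this is only an assumed transfer-learning setup (a frozen ImageNet backbone plus a small classification head); the article's exact training regime, input size, and number of enrolled subjects are not reproduced here.

```python
# Sketch of the deep-learning pipeline with VGG16 (VGG19 is obtained by swapping the class).
import tensorflow as tf

NUM_SUBJECTS = 50  # assumption: number of enrolled individuals in the dataset

base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # transfer learning: keep ImageNet features, train only the head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_SUBJECTS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels), epochs=20)
```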
Main Authors: | Wardah Farrukh, Dustin van der Haar |
---|---|
Format: | Article |
Language: | English |
Published: | Wiley, 2023-01-01 |
Series: | IET Biometrics |
Subjects: | access control; biometrics; lip print identification |
Online Access: | https://doi.org/10.1049/bme2.12073 |
author | Wardah Farrukh; Dustin van der Haar |
collection | DOAJ |
format | Article |
id | doaj-art-d97c939aa34942df967c6bb7ce21f367 |
institution | Kabale University |
issn | 2047-4938; 2047-4946 |
language | English |
publishDate | 2023-01-01 |
publisher | Wiley |
record_format | Article |
series | IET Biometrics |
affiliation | Wardah Farrukh and Dustin van der Haar: Academy of Computer Science and Software Engineering, University of Johannesburg, Johannesburg, Gauteng, South Africa |
title | Lip print‐based identification using traditional and deep learning |
topic | access control biometrics lip print identification |
url | https://doi.org/10.1049/bme2.12073 |