Reconstructing unreadable QR codes: a deep learning based super resolution strategy


Bibliographic Details
Main Author: Yasin Sancar
Format: Article
Language:English
Published: PeerJ Inc. 2025-04-01
Series:PeerJ Computer Science
Subjects:
Online Access:https://peerj.com/articles/cs-2841.pdf
author Yasin Sancar
author_facet Yasin Sancar
author_sort Yasin Sancar
collection DOAJ
description Quick-response (QR) codes have become an integral component of the digital transformation process, facilitating fast and secure information sharing across various sectors. However, factors such as low resolution, misalignment, panning and rotation, often caused by the limitations of scanning devices, can significantly impact their readability. These distortions prevent reliable extraction of embedded data, increase processing times and pose potential security risks. In this study, four super-resolution models, the Enhanced Deep Super-Resolution (EDSR) network, the Very Deep Super-Resolution (VDSR) network, the Efficient Sub-Pixel Convolutional Network (ESPCN) and the Super-Resolution Convolutional Neural Network (SRCNN), are used to mitigate resolution loss, rotation errors and misalignment issues. To simulate scanner-induced distortions, a dataset of 16,000 computer-generated QR codes with various filters was used. In addition, the super-resolution models were applied to 4,593 QR codes that OpenCV's QRCodeDetector function could not decode in real-world scans; after enhancement, EDSR, VDSR, ESPCN and SRCNN made 4,261, 4,229, 4,255 and 4,042 of these QR codes readable, respectively. Furthermore, for 2,899 computer-simulated QR codes that were initially unreadable, applying the EDSR, VDSR, ESPCN and SRCNN models and decoding the enhanced images with OpenCV's deep-learning-based WeChat QR Code Detector yielded 2,891, 2,884, 2,433 and 2,560 successful reads, respectively. These findings show that super-resolution models can effectively improve the readability of degraded or low-resolution QR codes.
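The recovery strategy the abstract describes, attempting a plain decode and retrying on a super-resolved image only when that fails, can be sketched as follows. This is a hypothetical illustration, not the author's code: the function name `decode_with_sr_fallback` and the injected callables are assumptions. With OpenCV, `decode` would wrap `cv2.QRCodeDetector().detectAndDecode(img)` (or `cv2.wechat_qrcode.WeChatQRCode`), and `upscale` would wrap `cv2.dnn_superres.DnnSuperResImpl_create()` loaded with a pretrained EDSR, VDSR, ESPCN or SRCNN model; here they are passed in as parameters so the control flow is testable without OpenCV.

```python
# Sketch of a decode-then-retry pipeline (assumed structure, not the
# paper's implementation): decode the QR image directly; on failure,
# upscale with a super-resolution model and decode again.
from typing import Callable, Optional, TypeVar

Img = TypeVar("Img")  # e.g. a numpy array when using OpenCV


def decode_with_sr_fallback(
    img: Img,
    decode: Callable[[Img], str],   # returns "" when decoding fails
    upscale: Callable[[Img], Img],  # super-resolution model, e.g. x2 EDSR
) -> Optional[str]:
    """Return the decoded payload, retrying once on a super-resolved image."""
    data = decode(img)
    if data:                         # plain decode succeeded
        return data
    data = decode(upscale(img))      # retry after super-resolution
    return data or None              # None: unreadable even after upscaling


if __name__ == "__main__":
    # Toy stand-ins: an "image" is just its module width in pixels; the
    # stub decoder needs modules at least 2 px wide, and the stub upscaler
    # doubles the resolution (playing the role of an x2 EDSR model).
    decode = lambda px: "payload" if px >= 2 else ""
    upscale = lambda px: px * 2
    assert decode_with_sr_fallback(1, decode, upscale) == "payload"
```

The same wrapper works for both experiments in the abstract: swap the `decode` callable between the classic QRCodeDetector and the WeChat detector while keeping the super-resolution step fixed.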
format Article
id doaj-art-e87d016c36364c92a386d1e46f13e430
institution OA Journals
issn 2376-5992
language English
publishDate 2025-04-01
publisher PeerJ Inc.
record_format Article
series PeerJ Computer Science
spelling doaj-art-e87d016c36364c92a386d1e46f13e430 2025-08-20T02:13:03Z eng PeerJ Inc. PeerJ Computer Science 2376-5992 2025-04-01 11 e2841 10.7717/peerj-cs.2841
title Reconstructing unreadable QR codes: a deep learning based super resolution strategy
topic QR code reading
Super-resolution
SRCNN
ESPCN
EDSR
VDSR
url https://peerj.com/articles/cs-2841.pdf