Deep attributes and decisions fusion for no-reference video quality analysis

Video Quality Assessment (VQA) is a critical component of many technologies, from automated video broadcasting to display systems, and determining visual quality requires a balanced examination of visual features and functionality. Previous research has shown that features derived from pre-trained Convolutional Neural Network (CNN) models are highly effective in a wide range of image analysis and computer vision tasks. In this research, we build a novel architecture for No-Reference Video Quality Assessment (NR-VQA) on features collected from pre-trained deep neural networks, combined with transfer learning, temporal pooling, and regression. Our results are obtained using only temporally pooled deep features, with no manually crafted features at all. Specifically, this study describes a deep learning-based strategy for NR-VQA in which several pre-trained deep neural networks characterize probable image and video distortions in parallel. Each pre-trained CNN extracts a spatially pooled, intensity-adjusted video-level feature representation, which is then individually mapped onto subjective quality assessments. Finally, the perceived quality of a video sequence is computed by fusing the quality estimates of the individual regressors. Extensive experiments demonstrate that the proposed approach sets a new state of the art on two large benchmark video quality assessment datasets with authentic distortions. Moreover, the results show that fusing the decisions of different deep networks can substantially improve NR-VQA.
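The architecture the abstract outlines lends itself to a compact sketch: several pre-trained CNNs extract frame-level deep features in parallel, those features are pooled over time into video-level representations, a separate regressor maps each network's representation onto subjective scores, and the final prediction fuses the regressors' decisions. The sketch below is a minimal illustration under stated assumptions, not the author's implementation: the ResNet backbones, the mean-plus-standard-deviation temporal pooling, and the RBF-kernel SVRs are hypothetical stand-ins for whichever networks, pooling scheme, and regressors the paper actually uses.

```python
# Illustrative sketch only, not the paper's released code. Assumed pipeline:
# parallel pre-trained CNNs -> temporally pooled video-level features ->
# one regressor per network -> fused quality decisions.
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVR


def make_feature_extractor(backbone):
    """Drop the classification head so the CNN emits pooled deep features."""
    return torch.nn.Sequential(*list(backbone.children())[:-1]).eval()


# Hypothetical pair of pre-trained CNNs run in parallel.
EXTRACTORS = {
    "resnet18": make_feature_extractor(models.resnet18(weights="IMAGENET1K_V1")),
    "resnet50": make_feature_extractor(models.resnet50(weights="IMAGENET1K_V2")),
}


@torch.no_grad()
def video_features(extractor, frames):
    """frames: (T, 3, H, W) float tensor, ImageNet-normalized.
    Frame-level deep features are temporally pooled (mean and std)
    into a single video-level representation."""
    per_frame = extractor(frames).flatten(1)                   # (T, C)
    pooled = torch.cat([per_frame.mean(0), per_frame.std(0)])  # (2C,)
    return pooled.numpy()


def train_regressors(train_videos, mos):
    """Fit one SVR per backbone, mapping pooled features to subjective scores."""
    regressors = {}
    for name, extractor in EXTRACTORS.items():
        X = np.stack([video_features(extractor, v) for v in train_videos])
        regressors[name] = SVR(kernel="rbf").fit(X, mos)
    return regressors


def predict_quality(regressors, frames):
    """Decision fusion: average the per-network quality estimates."""
    estimates = [reg.predict(video_features(EXTRACTORS[name], frames)[None])[0]
                 for name, reg in regressors.items()]
    return float(np.mean(estimates))
```

In use, one would load each clip's frames with the backbones' ImageNet preprocessing, call train_regressors on a subjectively annotated VQA dataset (MOS labels), and score held-out clips with predict_quality. Plain averaging is only the simplest fusion rule; a weighted combination of the regressors' outputs is an equally plausible reading of the abstract.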

Bibliographic Details
Main Author: Adil Baig (University of Agriculture, Pakistan)
Format: Article
Language: English
Published: REA Press, 2023-09-01
Series: Big Data and Computing Visions, vol. 3, no. 3, pp. 91-103
ISSN: 2783-4956; 2821-014X
DOI: 10.22105/bdcv.2023.415895.1165
Subjects: video quality assessment; no-reference video quality assessment; deep neural networks
Online Access: https://www.bidacv.com/article_189314_c7555c86e1c77fa6447486b5bb4b547c.pdf