No reference video quality assessment with authentic distortions using 3-D deep convolutional neural network

Roger Gomez Nieto, Hernan Dario Benitez Restrepo, Roger Figueroa Quintero, Alan Bovik

Research output: Contribution to journal › Conference article › peer-review


Abstract

Video Quality Assessment (VQA) is an essential topic in several industries, ranging from video streaming to camera manufacturing. In this paper, we present a novel method for No-Reference VQA. The framework is fast and does not require the extraction of hand-crafted features. We extract convolutional features from a 3-D convolutional neural network (C3D) and feed them to a trained Support Vector Regressor to obtain a VQA score. We apply transformations to different color spaces to generate more discriminative deep features, and we extract features from several layers, with and without overlap, to find the configuration that best improves the VQA score. We tested the proposed approach on the LIVE-Qualcomm dataset. An extensive evaluation of the perceptual quality prediction model yielded a final Pearson correlation of 0.7749±0.0884 with Mean Opinion Scores, showing that the method achieves good video quality prediction and outperforms other leading state-of-the-art VQA models.
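The regression stage described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature matrix here is random data standing in for C3D activations (the real pipeline extracts them from video clips with a pretrained 3-D CNN), the 4096-dimensional size, split sizes, and SVR hyperparameters are assumptions, and the Mean Opinion Scores are placeholders.

```python
import numpy as np
from sklearn.svm import SVR
from scipy.stats import pearsonr

# Hypothetical stand-in for C3D deep features: in the paper these come from
# activations of a 3-D convolutional network on video clips; here we draw
# random vectors purely to illustrate the regression-and-evaluation stage.
rng = np.random.default_rng(0)
n_videos, feat_dim = 80, 4096            # assumed feature dimensionality
features = rng.normal(size=(n_videos, feat_dim))
mos = rng.uniform(0.0, 100.0, size=n_videos)  # placeholder Mean Opinion Scores

# Fit a Support Vector Regressor on a training split, as in the described
# pipeline, then predict quality scores for held-out videos.
split = 60
svr = SVR(kernel="rbf", C=1.0)
svr.fit(features[:split], mos[:split])
pred = svr.predict(features[split:])

# Evaluate with Pearson's linear correlation against the subjective scores,
# the same criterion the paper reports (PLCC with MOS).
plcc, _ = pearsonr(pred, mos[split:])
```

With real C3D features and genuine MOS labels, `plcc` would correspond to the correlation figure reported in the abstract; on random data it is only a demonstration of the evaluation mechanics.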

Original language: English
Article number: 168
Journal: IS and T International Symposium on Electronic Imaging Science and Technology
Volume: 2020
Issue number: 9
DOIs
State: Published - 26 Jan 2020
Event: 17th Image Quality and System Performance Conference, IQSP 2020 - Burlingame, United States
Duration: 26 Jan 2020 - 30 Jan 2020
