No reference video quality assessment with authentic distortions using 3-D deep convolutional neural network

Roger Gomez Nieto, Hernan Dario Benitez Restrepo, Roger Figueroa Quintero, Alan Bovik

Output: Contribution to journal › Conference article › peer-review

2 Citations (Scopus)

Abstract

Video Quality Assessment (VQA) is an essential topic in several industries, ranging from video streaming to camera manufacturing. In this paper, we present a novel method for No-Reference VQA. The framework is fast and does not require the extraction of hand-crafted features. We extract convolutional features from the 3-D convolutional neural network C3D and feed them to a trained Support Vector Regressor to obtain a VQA score. We apply transformations to different color spaces to generate more discriminant deep features. We extract features from several layers, with and without overlap, to find the configuration that best improves the VQA score. We tested the proposed approach on the LIVE-Qualcomm dataset. We extensively evaluated the perceptual quality prediction model, obtaining a final Pearson correlation of 0.7749±0.0884 with Mean Opinion Scores, and showed that it achieves good video quality prediction, outperforming other leading state-of-the-art VQA models.
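As a rough illustration of the pipeline the abstract describes (deep 3-D convolutional features regressed to a quality score), the following sketch pairs a hypothetical feature extractor with a scikit-learn Support Vector Regressor. The extract_c3d_features helper and the toy data are placeholder assumptions, not the authors' implementation or the actual C3D network.

```python
# Minimal sketch of the described pipeline: 3-D CNN features -> SVR -> quality score.
# The feature extractor is a hypothetical stand-in for C3D convolutional activations.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def extract_c3d_features(video_clip: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder: pool a (frames, H, W, channels) clip into a
    fixed-length vector, standing in for C3D convolutional-layer features."""
    return video_clip.mean(axis=(0, 1, 2))  # crude spatio-temporal pooling

# Assumed toy data: 20 training clips with known Mean Opinion Scores (MOS).
rng = np.random.default_rng(0)
train_clips = [rng.random((16, 112, 112, 3)) for _ in range(20)]
train_mos = rng.uniform(1.0, 5.0, size=20)

X_train = np.stack([extract_c3d_features(clip) for clip in train_clips])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
model.fit(X_train, train_mos)

# Predict a quality score for an unseen clip.
test_clip = rng.random((16, 112, 112, 3))
score = model.predict(extract_c3d_features(test_clip)[None, :])
print(f"Predicted quality score: {score[0]:.3f}")
```

In the paper itself, the regressor is trained against MOS labels from the LIVE-Qualcomm dataset; the random arrays above only stand in for that data so the sketch runs on its own.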

Original language: English
Article number: 168
Publication: IS and T International Symposium on Electronic Imaging Science and Technology
Volume: 2020
Issue: 9
DOI
Status: Published - 26 Jan 2020
Event: 17th Image Quality and System Performance Conference, IQSP 2020 - Burlingame, United States
Duration: 26 Jan 2020 – 30 Jan 2020
