Abstract
Video Quality Assessment (VQA) is an essential topic in several industries, ranging from video streaming to camera manufacturing. In this paper, we present a novel method for No-Reference VQA. This framework is fast and does not require the extraction of hand-crafted features. We extract convolutional features from a 3-D C3D Convolutional Neural Network and feed them to a trained Support Vector Regressor to obtain a VQA score. We apply transformations to different color spaces to generate more discriminative deep features. We extract features from several layers, with and without overlap, finding the best configuration to improve the VQA score. We tested the proposed approach on the LIVE-Qualcomm dataset. We extensively evaluated the perceptual quality prediction model, obtaining a final Pearson correlation of 0.7749±0.0884 with Mean Opinion Scores, and showed that it achieves good video quality prediction, outperforming other leading state-of-the-art VQA models.
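The pipeline the abstract describes (deep spatio-temporal features pooled per video, then regressed to a quality score) can be sketched as follows. This is a minimal illustration using scikit-learn's `SVR` on synthetic features, since the actual C3D activations and LIVE-Qualcomm MOS labels are not reproduced here; the feature dimension, sample count, and SVR hyperparameters are all illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-ins: one feature vector per video, as if pooled from
# 3-D CNN (C3D-style) convolutional activations -- illustrative only.
n_videos, feat_dim = 40, 128
features = rng.normal(size=(n_videos, feat_dim))
mos = rng.uniform(1.0, 5.0, size=n_videos)  # synthetic Mean Opinion Scores

# Train a Support Vector Regressor to map deep features to a quality
# score, mirroring the paper's pipeline (hyperparameters are assumptions).
svr = SVR(kernel="rbf", C=1.0, epsilon=0.1)
svr.fit(features, mos)

# One predicted perceptual-quality score per video.
predicted = svr.predict(features)
print(predicted.shape)
```

In the paper's setting, the Pearson correlation between `predicted` and held-out MOS values would be the evaluation metric; here the data is random, so only the plumbing is meaningful.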
| Original language | English |
|---|---|
| Article number | 168 |
| Journal | IS&T International Symposium on Electronic Imaging Science and Technology |
| Volume | 2020 |
| Issue number | 9 |
| DOIs | |
| State | Published - 26 Jan 2020 |
| Event | 17th Image Quality and System Performance Conference, IQSP 2020 - Burlingame, United States Duration: 26 Jan 2020 → 30 Jan 2020 |
Fingerprint
Research topics of 'No reference video quality assessment with authentic distortions using 3-D deep convolutional neural network'.