Multi-Scale Feature-Guided Stereoscopic Video Quality Assessment Based on 3D Convolutional Neural Network

Author(s): Yingjie Feng, Sumei Li, Yongli Chang
2020 · Vol 2020 (9) · pp. 168-1-168-7
Author(s): Roger Gomez Nieto, Hernan Dario Benitez Restrepo, Roger Figueroa Quintero, Alan Bovik

Video Quality Assessment (VQA) is an essential topic in several industries, ranging from video streaming to camera manufacturing. In this paper, we present a novel method for No-Reference VQA. The framework is fast and does not require hand-crafted features. We extracted convolutional features from a 3-D (C3D) Convolutional Neural Network and fed them to a trained Support Vector Regressor to obtain a VQA score. We applied transformations to different color spaces to generate more discriminative deep features, and we extracted features from several layers, with and without overlap, to find the configuration that best improves the VQA score. We tested the proposed approach on the LIVE-Qualcomm dataset. We extensively evaluated the perceptual quality prediction model, obtaining a final Pearson correlation of 0.7749 ± 0.0884 with Mean Opinion Scores, and showed that it achieves good video quality prediction, outperforming other leading state-of-the-art VQA models.
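The pipeline described in this abstract, deep features from a pretrained 3D CNN pooled per video and regressed against MOS with a Support Vector Regressor, can be sketched roughly as below. This is an illustrative assumption rather than the authors' code: torchvision's r3d_18 stands in for C3D, and the video clips and MOS labels are random placeholders.

```python
import torch
import torch.nn as nn
import numpy as np
from sklearn.svm import SVR
from torchvision.models.video import r3d_18

# Pretrained 3D CNN used only as a fixed feature extractor (classifier head removed).
backbone = r3d_18(weights="DEFAULT")
backbone.fc = nn.Identity()
backbone.eval()

def video_feature(clips: torch.Tensor) -> np.ndarray:
    """clips: (N, 3, T, H, W) tensor of N clips cut from one video."""
    with torch.no_grad():
        feats = backbone(clips)        # (N, 512) clip-level descriptors
    return feats.mean(dim=0).numpy()   # average-pool clips into one video descriptor

# Placeholder data: 8 "videos" of 4 random clips each, plus hypothetical MOS labels.
videos = [torch.rand(4, 3, 16, 112, 112) for _ in range(8)]
mos = np.random.uniform(1.0, 5.0, size=8)

X = np.stack([video_feature(v) for v in videos])
svr = SVR(kernel="rbf", C=1.0).fit(X, mos)   # quality regressor on deep features
print("predicted quality:", svr.predict(X[:1])[0])
```

In practice the regressor would be trained and evaluated with cross-validated splits of a labeled VQA dataset rather than on the same placeholder features shown here.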


2021 · Vol 1 (2) · pp. 14-22
Author(s): Xue Li, Jiali Qiu

With the rapid development of big data and artificial intelligence technology, users prefer to upload more and more local files to the cloud server to reduce the pressure on local storage. However, as users upload more and more duplicate files, especially images and videos, they not only waste network bandwidth but also complicate server management. To solve these problems, we design a multi-parameter video quality assessment model based on a 3D convolutional neural network for a video deduplication system. We use a method similar to the analytic hierarchy process to comprehensively evaluate the impact of packet loss rate, codec, frame rate, bit rate, and resolution on video quality, and we build a two-stream 3D convolutional neural network, with a spatial stream and a temporal stream, to capture the details of video distortion; a coding layer removes redundant distortion information. Finally, the LIVE and CSIQ data sets are used for experimental verification, and we compare the performance of the proposed scheme with the V-BLIINDS and VIDEO schemes under different packet loss rates. We also use part of the data set to simulate the interaction process between the client and the server and measure the time cost of the scheme. Overall, the scheme proposed in this paper achieves high quality-assessment efficiency.
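A rough sketch of the two-stream idea, a spatial branch over raw frames and a temporal branch over frame differences fused into a single quality score, is given below. The layer sizes, the frame-difference temporal signal, and the TwoStreamVQA name are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

def stream():
    # Small 3D-conv branch: (B, 3, T, H, W) -> per-video feature vector.
    return nn.Sequential(
        nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool3d(2),
        nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(),   # -> (B, 32)
    )

class TwoStreamVQA(nn.Module):
    def __init__(self):
        super().__init__()
        self.spatial = stream()       # operates on raw frames
        self.temporal = stream()      # operates on frame differences
        self.head = nn.Linear(64, 1)  # fuse both streams into one quality score

    def forward(self, clips):         # clips: (B, 3, T, H, W)
        diffs = clips[:, :, 1:] - clips[:, :, :-1]   # simple temporal signal
        fused = torch.cat([self.spatial(clips), self.temporal(diffs)], dim=1)
        return self.head(fused).squeeze(1)

model = TwoStreamVQA()
scores = model(torch.rand(2, 3, 16, 64, 64))   # two placeholder clips
print(scores.shape)                            # torch.Size([2])
```

The network-level parameters the paper weights via an AHP-like scheme (packet loss rate, codec, frame rate, bit rate, resolution) would enter alongside these learned features; that weighting step is not shown here.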


2015 · Vol 2 (3) · pp. 247-268
Author(s): José Vinícius de Miranda Cardoso, Carlos Danilo Miranda Regis, Marcelo Sampaio de Alencar, ...

2003
Author(s): Susu Yao, Weisi Lin, Zhongkang Lu, EePing Ong, Xiao K. Yang
