A New Blind Video Quality Metric for Assessing Different Turbulence Mitigation Algorithms

Electronics ◽  
2021 ◽  
Vol 10 (18) ◽  
pp. 2277
Author(s):  
Chiman Kwan ◽  
Bence Budavari

Although many algorithms have been proposed to mitigate air turbulence in optical videos, there do not appear to be consistent blind video quality assessment metrics that can reliably assess the different approaches. Blind video quality assessment metrics are necessary because many videos containing air turbulence have no ground truth. In this paper, a simple and intuitive blind video quality assessment metric is proposed. The metric can reliably and consistently assess various turbulence mitigation algorithms for optical videos. Experimental results using more than 10 videos from the literature show that the proposed metric correlates well with human subjective evaluations. Compared with an existing blind video metric and two other blind image quality metrics, the proposed metric performed consistently better.
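
For intuition, a blind (no-reference) video score can be built from per-frame statistics alone, for example rewarding sharpness and penalizing frame-to-frame jitter. The sketch below is an illustrative assumption of such a score, not the metric proposed in the paper; the weighting `alpha`, the gradient-based sharpness measure, and the instability penalty are all hypothetical choices made here for demonstration.

```python
# Illustrative sketch of a simple blind (no-reference) video quality score.
# NOT the metric proposed in the paper above; it only shows the general idea of
# scoring a turbulence-mitigated video without ground truth: sharper frames and
# smoother frame-to-frame changes score higher.
import numpy as np

def frame_sharpness(frame: np.ndarray) -> float:
    """Mean gradient magnitude of a grayscale frame (higher = sharper)."""
    gy, gx = np.gradient(frame.astype(np.float64))
    return float(np.mean(np.hypot(gx, gy)))

def temporal_instability(frames: list) -> float:
    """Mean absolute difference between consecutive frames (higher = more jitter)."""
    diffs = [np.mean(np.abs(a.astype(np.float64) - b.astype(np.float64)))
             for a, b in zip(frames[:-1], frames[1:])]
    return float(np.mean(diffs)) if diffs else 0.0

def blind_video_quality(frames: list, alpha: float = 1.0) -> float:
    """Combine a sharpness reward and a temporal-instability penalty.

    `alpha` is an arbitrary weight chosen for illustration only.
    """
    sharpness = float(np.mean([frame_sharpness(f) for f in frames]))
    return sharpness - alpha * temporal_instability(frames)
```

Under these assumptions, a higher score indicates a video that is both sharper and temporally steadier, which is the qualitative behavior a turbulence mitigation algorithm is expected to improve.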

Author(s):  
Subrahmanyam CH ◽  
Venkata Rao D ◽  
Usha Rani N

In this work, we propose NRDPF-VQA (No-Reference Distortion Patch Features Video Quality Assessment), a model for measuring the quality of H.264/AVC (Advanced Video Coding) video without a reference. The proposed method exploits contrast changes in the video that arise from luminance changes. The proposed quality metric was tested on the LIVE video database. Experimental results compare the new index against other NR-VQA models that require training, using the LIVE, CSIQ, and VQEG HDTV video databases. The resulting scores are evaluated against human subjective scores in the form of DMOS.
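
To make the contrast/luminance idea concrete, the sketch below pools local luminance-contrast statistics into a single no-reference score. It is an illustrative assumption only, not the NRDPF-VQA model itself; the 16-pixel patch size and simple mean pooling are hypothetical choices.

```python
# Minimal sketch: pool local luminance-contrast statistics into a no-reference
# score. Illustrates the general contrast/luminance idea mentioned in the
# abstract; it is NOT the actual NRDPF-VQA model.
import numpy as np

def local_contrast_score(frame: np.ndarray, patch: int = 16) -> float:
    """Average standard deviation of luminance over non-overlapping patches."""
    f = frame.astype(np.float64)
    h, w = f.shape
    stds = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            stds.append(f[y:y + patch, x:x + patch].std())
    return float(np.mean(stds)) if stds else 0.0

def video_contrast_score(frames: list) -> float:
    """Pool per-frame contrast scores over the whole video (simple mean)."""
    return float(np.mean([local_contrast_score(f) for f in frames]))
```

Such a pooled score could then be correlated against subjective DMOS values to judge how well the statistic tracks human opinion, which is how no-reference indices are typically validated.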


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Milan Mirkovic ◽  
Petar Vrgovic ◽  
Dubravko Culibrk ◽  
Darko Stefanovic ◽  
Andras Anderla

Video quality as perceived by human observers is the ground truth when Video Quality Assessment (VQA) is in question. It depends on many variables, one of them being the content of the video being evaluated. Despite the evidence that content affects the quality score a sequence receives from human evaluators, currently available VQA databases mostly comprise sequences that fail to take this into account. In this paper, we aim to identify and analyze differences between human cognitive, affective, and conative responses to a set of videos commonly used for VQA and a set of videos specifically chosen to include content that might affect evaluators' judgment of perceived video quality. Our findings indicate that considerable differences exist between the two sets on selected factors, which leads us to conclude that videos featuring a different type of content than the currently employed ones might be more appropriate for VQA.
