Human Visual Perception Based Image Quality Assessment for Video Prediction

Author(s): JiWen Shi, Qiuguo Zhu, Yuanjie Chen, Jun Wu, Rong Xiong
Electronics, 2020, Vol 9 (2), pp. 252
Author(s): Xiaodi Guan, Fan Li, Lijun He

In this paper, we propose a no-reference image quality assessment (NR-IQA) approach for authentically distorted images based on expanding proxy labels. To distinguish them from human labels, we refer to quality scores generated by a traditional NR-IQA algorithm as "proxy labels": "proxy" indicates that the objective results are obtained by a computer through feature extraction and assessment rather than by human judgment. To cope with the limited size of image quality assessment (IQA) datasets, we adopt a cascading transfer-learning method. First, we generate a large number of proxy labels, the quality scores of authentically distorted images, using a traditional no-reference IQA method. Then a deep network is trained on these proxy labels, so that it learns IQA-related knowledge from a large set of scored images. Finally, fine-tuning is used to inherit the knowledge represented in the trained network, so that the learned mapping aligns more closely with human visual perception. The experimental results demonstrate that the proposed algorithm outperforms existing algorithms: on the LIVE In the Wild Image Quality Challenge database and the KonIQ-10k database (two standard databases for authentically distorted image quality assessment), it achieves good consistency between human visual perception and the predicted quality scores of authentically distorted images.
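The cascading transfer-learning pipeline described above can be illustrated with a minimal PyTorch sketch, assuming a generic CNN backbone regressing a scalar quality score; the data loaders, learning rates and epoch counts are hypothetical, and `proxy_loader` stands in for whatever traditional NR-IQA scorer supplies the proxy labels.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stage 1: pretrain a CNN regressor on proxy labels produced by a
# traditional NR-IQA method; Stage 2: fine-tune the same network on the
# small set of human-labelled (MOS) images.

def build_regressor():
    backbone = models.resnet18(weights=None)             # any CNN backbone would do
    backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # scalar quality score
    return backbone

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for images, scores in loader:                    # (B,3,H,W), (B,)
            opt.zero_grad()
            loss = loss_fn(model(images).squeeze(1), scores)
            loss.backward()
            opt.step()
    return model

model = build_regressor()
# train(model, proxy_loader, epochs=10, lr=1e-4)  # large proxy-labelled set
# train(model, human_loader, epochs=5, lr=1e-5)   # fine-tune on human MOS
```

A lower learning rate in the second stage is the usual fine-tuning choice, so the knowledge learned from the proxy labels is adapted rather than overwritten.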


2016, Vol 16 (6), pp. 316-325
Author(s): Mariusz Oszust

Abstract: Advances in the development of imaging devices have created a need for automatic quality evaluation of displayed visual content that is consistent with human visual perception. In this paper, an approach to full-reference image quality assessment (IQA) is proposed in which several IQA measures, each representing a different approach to modelling human visual perception, are efficiently combined to produce an objective quality evaluation of examined images that is highly correlated with the evaluation provided by human subjects. The selection of IQA measures for creating a regression-based hybrid measure, or multimeasure, is formulated as an optimisation problem and solved using a genetic algorithm. Experimental evaluation on the four largest IQA benchmarks reveals that the multimeasures obtained with the proposed approach outperform state-of-the-art full-reference IQA techniques, including other recently developed fusion approaches.
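A minimal sketch of the measure-selection step, assuming `X` holds each candidate IQA measure's output per image (one column per measure) and `y` the corresponding subjective scores; the truncation selection, one-point crossover and mutation rate used here are illustrative choices rather than the paper's exact GA configuration, with fitness taken as the Spearman correlation (SROCC) of a linear combiner built on the selected measures.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression

def fitness(mask, X, y):
    """SROCC of a linear regressor trained on the selected measures."""
    if mask.sum() == 0:
        return -1.0
    pred = LinearRegression().fit(X[:, mask], y).predict(X[:, mask])
    return spearmanr(pred, y)[0]

def select_measures(X, y, pop=30, gens=50, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    population = rng.random((pop, n)) < 0.5                   # random bit-mask individuals
    for _ in range(gens):
        scores = np.array([fitness(ind, X, y) for ind in population])
        parents = population[np.argsort(scores)[-pop // 2:]]  # keep the best half
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = int(rng.integers(1, n))
            child = np.concatenate([a[:cut], b[cut:]])        # one-point crossover
            child ^= rng.random(n) < 0.05                     # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, np.array(children)])
    return max(population, key=lambda ind: fitness(ind, X, y))  # best bit mask
```

In practice the fitness would be evaluated on a held-out split so that the selected combination generalises rather than overfitting the training images.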


Symmetry, 2019, Vol 11 (12), pp. 1494
Author(s): Yueli Cui, Aihua Chen, Benquan Yang, Shiqing Zhang, Yang Wang

Compared with ordinary single-exposure images, multi-exposure fusion (MEF) images are prone to color imbalance, loss of detail information and abnormal exposure, introduced while combining multiple images with different exposure levels. In this paper, we propose a human-visual-perception-based quality assessment method for MEF images that measures quality degradation accurately through the related perceptual features (color, dense scale-invariant feature transform (DSIFT) and exposure), which are closely related to the symmetry principle of human eyes. First, the L1 norm of the chrominance components between the fused image and a designed pseudo-image with the most severe color attenuation is computed to measure global color degradation, and a color-saturation similarity term is added to eliminate the influence of color over-saturation. Second, a set of distorted images at different exposure levels carrying the strong edge information of the fused image is constructed through structural transfer; DSIFT similarity and DSIFT saturation are then computed to measure local detail loss and detail enhancement, respectively. Third, a Gaussian exposure function is used to detect over-exposed and under-exposed areas, and the above perceptual features are aggregated with a random forest to predict the final quality of the fused image. Experimental results on a public MEF subjective assessment database show the superiority of the proposed method over state-of-the-art image quality assessment models.
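As one illustration of the exposure feature, abnormal exposure can be flagged with a Gaussian well-exposedness weight, a common device in the exposure-fusion literature; this is a minimal sketch assuming luminance normalised to [0, 1], and the mid-grey mean, spread and threshold below are assumed values, not those of the paper.

```python
import numpy as np

def exposure_weight(luma, mu=0.5, sigma=0.2):
    """Gaussian well-exposedness: luma is an H x W array in [0, 1];
    pixels far from mid-grey receive weights near zero."""
    return np.exp(-((luma - mu) ** 2) / (2 * sigma ** 2))

def abnormal_exposure_ratio(luma, thresh=0.1):
    """Fraction of pixels flagged as over- or under-exposed."""
    return float((exposure_weight(luma) < thresh).mean())
```

Features of this kind, together with the color and DSIFT terms, would then form the input vector that the random forest maps to a quality score.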


2020, pp. 1-1
Author(s): Weiling Chen, Ke Gu, Tiesong Zhao, Gangyi Jiang, Patrick Le Callet

Author(s): Yuan-Yuan Fan, Ying-Jun Sang

Building on the current state of research in comprehensive image quality assessment, this paper proposes a no-reference comprehensive image quality assessment function model. First, image quality is characterized by three indices, contrast, sharpness, and signal-to-noise ratio (SNR), and the interrelation of these assessment indices is analyzed. Second, the weights in the comprehensive assessment model are studied when only contrast, sharpness, or SNR is varied. Finally, building on the study of each distortion type in isolation, and considering the different types of image distortion, we determine the weight of each index in the comprehensive image quality assessment. The results show that the proposed no-reference comprehensive assessment function model fits human visual perception well and correlates strongly with the Difference Mean Opinion Score (DMOS): the Correlation Coefficient (CC) reached 0.8331, the Spearman Rank-Order Correlation Coefficient (SROCC) reached 0.8206, the Mean Absolute Error (MAE) was only 0.0920, the Root Mean Square Error (RMSE) was only 0.1122, and the Outlier Ratio (OR) was only 0.0365. The proposed method can be applied to the television systems of photoelectric measurement equipment to provide accurate and reliable no-reference quality assessment of television images.
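A minimal sketch of such a comprehensive assessment function, assuming each index score has been normalised to [0, 1] with higher meaning better; the linear form and the example weights are illustrative only, since the paper determines the weights separately for each distortion type.

```python
import numpy as np

def comprehensive_quality(contrast, sharpness, snr, weights=(0.4, 0.35, 0.25)):
    """Weighted combination of the three no-reference indices."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                    # keep the weights a convex combination
    return float(w @ np.array([contrast, sharpness, snr]))

# Example: a well-contrasted, fairly sharp but slightly noisy image.
print(comprehensive_quality(contrast=0.9, sharpness=0.8, snr=0.6))
```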


Author(s): Biwei Chi, Mei Yu, Gangyi Jiang, Zhouyan He, Zongju Peng, ...
