Precise No-Reference Image Quality Evaluation Based on Distortion Identification

Author(s):  
Chenggang Yan ◽  
Tong Teng ◽  
Yutao Liu ◽  
Yongbing Zhang ◽  
Haoqian Wang ◽  
...  

The difficulty of no-reference image quality assessment (NR IQA) often lies in the lack of knowledge about the distortion in the image, which makes quality assessment blind and thus inefficient. To tackle this issue, in this article we propose a novel scheme for precise NR IQA, which includes two successive steps, i.e., distortion identification and targeted quality evaluation. In the first step, we employ the well-known Inception-ResNet-v2 neural network to train a classifier that assigns the possible distortion in the image to one of the four most common distortion types, i.e., Gaussian white noise (WN), Gaussian blur (GB), JPEG compression (JPEG), and JPEG2000 compression (JP2K). Specifically, the deep neural network is trained on the large-scale Waterloo Exploration database, which ensures the robustness and high performance of distortion classification. In the second step, after the distortion type of the image has been determined, we design a distortion-specific approach to quantify the distortion level, which estimates the image quality more precisely. Extensive experiments performed on the LIVE, TID2013, CSIQ, and Waterloo Exploration databases demonstrate that (1) the accuracy of our distortion classification is higher than that of state-of-the-art distortion classification methods, and (2) the proposed NR IQA method outperforms state-of-the-art NR IQA methods in quantifying image quality.
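As a rough illustration of the two-step scheme, the sketch below pairs an Inception-ResNet-v2 classifier with a dictionary of per-distortion quality estimators. The backbone comes from Keras, and the `scorers` callables are hypothetical placeholders for the distortion-specific evaluators; this is an assumption-laden sketch, not the authors' released code.

```python
import numpy as np
import tensorflow as tf

DISTORTION_TYPES = ["WN", "GB", "JPEG", "JP2K"]

def build_classifier(num_classes=4):
    # Inception-ResNet-v2 backbone with a small softmax head, to be
    # fine-tuned on distortion labels (e.g., from Waterloo Exploration).
    base = tf.keras.applications.InceptionResNetV2(
        include_top=False, weights="imagenet", pooling="avg")
    head = tf.keras.layers.Dense(num_classes, activation="softmax")(base.output)
    return tf.keras.Model(base.input, head)

def assess_quality(image, classifier, scorers):
    # Step 1: identify the most likely distortion type.
    x = tf.keras.applications.inception_resnet_v2.preprocess_input(
        np.expand_dims(image.astype("float32"), 0))
    probs = classifier.predict(x, verbose=0)[0]
    distortion = DISTORTION_TYPES[int(np.argmax(probs))]
    # Step 2: hand off to the distortion-specific quality estimator
    # (scorers is a hypothetical dict mapping type -> scoring function).
    return distortion, scorers[distortion](image)
```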

2020 ◽  
Vol 64 (1) ◽  
pp. 10505-1-10505-16
Author(s):  
Yin Zhang ◽  
Xuehan Bai ◽  
Junhua Yan ◽  
Yongqi Xiao ◽  
C. R. Chatwin ◽  
...  

Abstract A new blind image quality assessment method called No-Reference Image Quality Assessment Based on Multi-Order Gradients Statistics is proposed. It addresses two shortcomings of existing no-reference methods: their inability to determine the type of image distortion and the poor robustness of their quality evaluation across different distortion types. In this article, an 18-dimensional image feature vector is constructed from gradient magnitude features, relative gradient orientation features, and relative gradient magnitude features over two scales and three orders, on the basis of the relationship between multi-order gradient statistics and the type and degree of image distortion. The feature matrix and distortion types of known distorted images are used to train an AdaBoost_BP neural network that determines the image distortion type; the feature matrix and subjective scores of known distorted images are used to train an AdaBoost_BP neural network that determines the image distortion degree. A series of comparative experiments was carried out on the Laboratory of Image and Video Engineering (LIVE), LIVE Multiply Distorted Image Quality, Tampere Image, and Optics Remote Sensing Image databases. Experimental results show that the proposed method judges the distortion type with high accuracy and that its quality score shows good subjective consistency and robustness for all types of distortion. The performance of the proposed method is not restricted to a particular database, and the method has high operational efficiency.
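The sketch below illustrates how such gradient statistics might be gathered in Python with SciPy. The Sobel filtering, 3x3 local averaging, and choice of moments are assumptions, and only approximate the paper's exact 18-dimensional, two-scale, three-order layout.

```python
import numpy as np
from scipy import ndimage, stats

def gradient_stats(gray):
    # gray: 2-D float array. First-order Sobel gradients.
    gx = ndimage.sobel(gray, axis=1, mode="reflect")
    gy = ndimage.sobel(gray, axis=0, mode="reflect")
    mag = np.hypot(gx, gy)                                # gradient magnitude
    ori = np.arctan2(gy, gx)                              # gradient orientation
    mag_rel = mag - ndimage.uniform_filter(mag, size=3)   # relative magnitude
    ori_rel = ori - ndimage.uniform_filter(ori, size=3)   # relative orientation
    feats = []
    for channel in (mag, mag_rel, ori_rel):
        feats += [channel.mean(), channel.std(), stats.skew(channel.ravel())]
    return feats

def multi_order_gradient_features(gray):
    # Two scales: original resolution and a 2x downsampled copy (18 values),
    # later fed to the AdaBoost_BP classifiers described in the abstract.
    return np.array(gradient_stats(gray) + gradient_stats(gray[::2, ::2]))
```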


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6457
Author(s):  
Hayat Ullah ◽  
Muhammad Irfan ◽  
Kyungjin Han ◽  
Jong Weon Lee

Due to recent advancements in virtual reality (VR) and augmented reality (AR), the demand for high-quality immersive content is a primary concern for production companies and consumers. Similarly, the recent record-breaking performance of deep learning in various domains of artificial intelligence has drawn researchers' attention to different fields of computer vision. To ensure the quality of immersive media content using these advanced deep learning technologies, several learning-based Stitched Image Quality Assessment methods have been proposed with reasonable performance. However, these methods cannot localize, segment, and extract the stitching errors in panoramic images, and they rely on computationally complex procedures for quality assessment. With these motivations, in this paper we propose a novel three-fold Deep Learning based No-Reference Stitched Image Quality Assessment (DLNR-SIQA) approach to evaluate the quality of immersive content. In the first fold, we fine-tune the state-of-the-art Mask R-CNN (Region-based Convolutional Neural Network) on cropped images with various manually annotated stitching errors drawn from two publicly available datasets. In the second fold, we segment and localize the stitching errors present in the immersive content. Finally, based on the distorted regions present in the immersive content, we measure the overall quality of the stitched images. Unlike existing methods that only measure image quality using deep features, the proposed method can efficiently segment and localize stitching errors and estimate image quality by examining the segmented regions. We also carried out extensive qualitative and quantitative comparisons with full-reference (FR-IQA) and no-reference (NR-IQA) image quality assessment methods on two publicly available datasets, where the proposed system outperformed the existing state-of-the-art techniques.
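A minimal sketch of the final scoring fold is given below, assuming Mask R-CNN has already produced binary stitching-error masks with confidences; the confidence-weighted coverage formula is illustrative, not the authors' exact measure.

```python
import numpy as np

def stitched_image_quality(error_masks, confidences, image_shape):
    """error_masks: list of HxW boolean arrays, one per detected stitching error;
    confidences: matching list of detection scores in [0, 1]."""
    h, w = image_shape[:2]
    if not error_masks:
        return 1.0  # no detected stitching errors -> highest quality
    # Confidence-weighted fraction of pixels covered by stitching errors.
    covered = np.zeros((h, w), dtype=float)
    for mask, conf in zip(error_masks, confidences):
        covered = np.maximum(covered, conf * mask.astype(float))
    # Quality decreases as more of the panorama is covered by errors.
    return float(1.0 - covered.mean())
```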


Author(s):  
Edwin Sybingco ◽  
Elmer P. Dadios

One of the challenges in image quality assessment (IQA) is to determine the quality score without the presence of the reference image. In this paper, the authors propose a no-reference image quality assessment method based on the natural statistics of double-opponent (DO) cells. It relies on statistical modeling of the three opponency channels using the generalized Gaussian distribution (GGD) and the asymmetric generalized Gaussian distribution (AGGD). The GGD and AGGD parameters are then fed to a feedforward neural network to predict the image quality. Results show that, for any of the opponency channels, the natural-statistics parameters fed to a feedforward neural network achieve satisfactory prediction of image quality.
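For reference, the moment-matching estimator below fits GGD shape and scale parameters to one opponency channel. The grid search over shape values is a common natural-scene-statistics recipe and an assumption here; the analogous AGGD fit and the feedforward regressor are omitted for brevity.

```python
import numpy as np
from scipy.special import gamma

def fit_ggd(x):
    """Estimate (alpha, sigma) of a zero-mean generalized Gaussian for a channel."""
    x = x.ravel() - x.mean()
    sigma_sq = np.mean(x ** 2)
    e_abs = np.mean(np.abs(x))
    rho = sigma_sq / (e_abs ** 2 + 1e-12)
    # Invert rho = Gamma(1/a) * Gamma(3/a) / Gamma(2/a)^2 over a grid of shapes.
    alphas = np.arange(0.2, 10.0, 0.001)
    rhos = gamma(1 / alphas) * gamma(3 / alphas) / gamma(2 / alphas) ** 2
    alpha = alphas[np.argmin(np.abs(rhos - rho))]
    return float(alpha), float(np.sqrt(sigma_sq))
```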


Algorithms ◽  
2020 ◽  
Vol 13 (12) ◽  
pp. 313
Author(s):  
Domonkos Varga

The goal of full-reference image quality assessment (FR-IQA) is to predict the perceptual quality of an image as perceived by human observers using its pristine (distortion-free) reference counterpart. In this study, we explore a novel, combined approach which predicts the perceptual quality of a distorted image by compiling a feature vector from convolutional activation maps. More specifically, a reference-distorted image pair is run through a pretrained convolutional neural network and the activation maps are compared with a traditional image similarity metric. Subsequently, the resulting feature vector is mapped onto perceptual quality scores with the help of a trained support vector regressor. A detailed parameter study is also presented in which the design choices of the proposed method are explained. Furthermore, we study the relationship between the amount of training images and the prediction performance; specifically, it is demonstrated that the proposed method can be trained with a small amount of data and still reach high prediction performance. Our best proposal, called ActMapFeat, is compared to the state-of-the-art on six publicly available benchmark IQA databases: KADID-10k, TID2013, TID2008, MDID, CSIQ, and VCL-FER. On these benchmark databases, our method significantly outperforms the state-of-the-art.
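The sketch below shows one way the feature-compilation step could look, assuming a torchvision VGG16 backbone, three example layers, and cosine similarity as a stand-in for the traditional similarity metric; the paper's exact layer set and metric may differ.

```python
import torch
import torchvision.models as models
from torchvision.models.feature_extraction import create_feature_extractor

# Example VGG16 stages (hypothetical choice of layers).
LAYERS = ["features.4", "features.9", "features.16"]
extractor = create_feature_extractor(
    models.vgg16(weights=models.VGG16_Weights.DEFAULT), return_nodes=LAYERS).eval()

@torch.no_grad()
def activation_map_features(ref, dist):
    """ref, dist: (1, 3, H, W) tensors; returns one similarity value per layer."""
    f_ref, f_dist = extractor(ref), extractor(dist)
    feats = []
    for name in LAYERS:
        a, b = f_ref[name].flatten(), f_dist[name].flatten()
        # Cosine similarity between activation maps as a stand-in metric.
        feats.append(torch.nn.functional.cosine_similarity(a, b, dim=0))
    return torch.stack(feats)  # later mapped to a quality score with an SVR
```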


Symmetry ◽  
2019 ◽  
Vol 11 (3) ◽  
pp. 296 ◽  
Author(s):  
Md. Layek ◽  
A. Uddin ◽  
Tuyen Le ◽  
TaeChoong Chung ◽  
Eui-Nam Huh

Objective image quality assessment (IQA) is imperative in the current multimedia-intensive world in order to assess the visual quality of an image at close to a human level of ability. Many factors, such as color intensity, structure, sharpness, contrast, and the presence of an object, draw human attention to an image. Psychological vision research suggests that human vision is biased toward the center area of an image and of the display screen. As a result, if the center part contains any visually salient information, it draws human attention even more, and any distortion in that part is perceived more readily than distortions elsewhere. To the best of our knowledge, previous IQA methods have not considered this fact. In this paper, we propose a full-reference image quality assessment (FR-IQA) approach using visual saliency and contrast; additionally, we give extra attention to the center by increasing the sensitivity of the similarity maps in that region. We evaluated our method on three large-scale, popular benchmark databases used by most current IQA researchers (TID2008, CSIQ, and LIVE), containing a total of 3345 distorted images with 28 different kinds of distortion. Our method is compared with 13 state-of-the-art approaches, and this comparison reveals a stronger correlation between our scores and human-evaluated values. The predicted quality score is consistent for distortion-specific as well as distortion-independent cases. Moreover, its fast processing makes it applicable to real-time applications. The MATLAB code for testing the algorithm is publicly available online.
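A minimal sketch of the center-biased pooling idea follows, assuming a Gaussian fall-off from the image center applied to a per-pixel similarity map; the exact weighting used in the paper may differ.

```python
import numpy as np

def center_weighted_pool(sim_map, sigma_ratio=0.3):
    """sim_map: HxW per-pixel similarity (e.g., saliency/contrast based)."""
    h, w = sim_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sigma = sigma_ratio * np.hypot(h, w)
    # Larger weights near the center increase its influence on the score.
    weight = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return float((weight * sim_map).sum() / weight.sum())
```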

