A Very Fast and Accurate Image Quality Assessment Method based on Mean Squared Error with Difference of Gaussians

2020, Vol. 64 (1), pp. 10502-1–10502-5
Author(s): Sung-Ho Bae, Seong-Bae Park

Abstract: Mean squared error (MSE) has long been the most widely used objective image quality assessment (IQA) metric owing to its mathematical tractability and computational simplicity, even though it correlates poorly with the perceived visual quality of distorted images. In contrast to MSE, recent IQA methods agree much more closely with measured visual quality; however, their applications are somewhat limited by their heavy computational cost and their unsuitability for optimization. To develop an IQA method that is closer to perceived visual quality, the authors aimed to incorporate simple yet powerful linear features into the form of MSE while retaining MSE's computational simplicity and desirable mathematical properties. Through comprehensive experiments, the authors found that a Difference of Gaussians (DoG) kernel significantly improves prediction performance while preserving these advantages within the MSE form. The proposed method performs better because DoG filtering closely approximates the neural response functions in the visual cortex of the human visual system and thus extracts perceptually important features. At the same time, it retains the computational simplicity and mathematical properties of MSE, since DoG is a very simple linear kernel. Extensive experiments showed that the proposed method achieves prediction performance competitive with recent IQA methods at a significantly lower computational complexity.
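A minimal sketch of the idea described above, assuming grayscale floating-point images and illustrative DoG scales (sigma values); it computes MSE on Difference-of-Gaussians responses rather than on raw pixels and is not the authors' exact formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_mse(reference, distorted, sigma1=1.0, sigma2=2.0):
    """MSE between DoG-filtered versions of two grayscale images."""
    ref = reference.astype(np.float64)
    dst = distorted.astype(np.float64)
    # DoG response: difference of two Gaussian-blurred copies of the image.
    dog_ref = gaussian_filter(ref, sigma1) - gaussian_filter(ref, sigma2)
    dog_dst = gaussian_filter(dst, sigma1) - gaussian_filter(dst, sigma2)
    return np.mean((dog_ref - dog_dst) ** 2)
```

Because the DoG filter is linear, this measure keeps the quadratic form of MSE and remains differentiable, which is what allows it to be used inside optimization loops.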

Atmosphere, 2019, Vol. 10 (5), pp. 244
Author(s): Quang-Khai Tran, Sa-kwang Song

This paper presents a computer-vision viewpoint on the radar echo extrapolation task in the precipitation nowcasting domain. Inspired by the success of several convolutional recurrent neural network models in this domain, including convolutional LSTM, convolutional GRU, and trajectory GRU, we designed a new sequence-to-sequence neural network structure to leverage these models in a realistic data context. In this design, we decreased the number of channels in the higher, more abstract recurrent layers rather than increasing it. We formulated the task as encoding five radar images and predicting 10 steps ahead at the pixel level, and found that using only the common mean squared error can misguide training and mislead evaluation; in particular, the image quality of the last predictions usually degraded rapidly. As a solution, we employed visual image quality assessment techniques, including Structural Similarity (SSIM) and multi-scale SSIM, to train our models. Experimental results show that our structure was more tolerant to increasing uncertainty in the data and that the use of image quality metrics significantly reduces the blurry-image issue. Moreover, we found that using SSIM was very effective, and a combination of SSIM with mean squared error and mean absolute error yielded the best prediction quality.
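As a rough illustration of the loss design discussed above, the PyTorch sketch below combines a simplified SSIM term (uniform local window instead of the standard Gaussian window) with MSE and MAE; the window size and weighting factors are assumptions, not the settings used in the paper.

```python
import torch
import torch.nn.functional as F

def ssim_simple(x, y, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM over N x C x H x W tensors scaled to [0, 1]."""
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, window, stride=1, padding=pad)
    var_x = F.avg_pool2d(x * x, window, stride=1, padding=pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, stride=1, padding=pad) - mu_y ** 2
    cov_xy = F.avg_pool2d(x * y, window, stride=1, padding=pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def nowcasting_loss(pred, target, w_ssim=1.0, w_mse=1.0, w_mae=1.0):
    """Combined training loss: weighted (1 - SSIM) + MSE + MAE."""
    return (w_ssim * (1.0 - ssim_simple(pred, target))
            + w_mse * F.mse_loss(pred, target)
            + w_mae * F.l1_loss(pred, target))
```

Minimizing `1 - SSIM` alongside the pixelwise terms penalizes structural blurring that MSE alone tends to tolerate.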


Image Quality Assessment (IQA) using mathematical methods offers favorable results in estimating the visual quality of distorted images. These methods are developed by identifying effective features that are consistent with the characteristics of the Human Visual System (HVS), but many of them are difficult to apply to optimization problems. This paper presents a DCT-based metric that is easy to implement and possesses mathematical properties such as differentiability, convexity, and validity as a distance metric, which makes it suitable for optimization. Using this method, we can compute the quality of the image as a whole as well as the quality of local image regions.
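A minimal sketch of a blockwise DCT-domain difference measure of the kind described above, assuming 8x8 blocks and grayscale float input; the block size, uniform coefficient weighting, and simple average pooling are illustrative simplifications rather than the paper's exact metric.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct2(block):
    """2-D type-II DCT of a single image block."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def dct_difference(reference, distorted, block=8):
    """Mean squared difference of blockwise DCT coefficients."""
    h, w = reference.shape
    local_scores = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            r = block_dct2(reference[i:i + block, j:j + block].astype(np.float64))
            d = block_dct2(distorted[i:i + block, j:j + block].astype(np.float64))
            local_scores.append(np.mean((r - d) ** 2))  # quality of a local region
    return np.mean(local_scores)  # quality of the image as a whole
```

Keeping the per-block scores before the final average yields the local quality estimates mentioned above, while the average gives a whole-image score.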


2021, Vol. 6 (2), pp. 140-145
Author(s): Mykola Maksymiv, Taras Rak

Contrast enhancement is a technique for increasing the contrast of an image to obtain better image quality. Because many existing contrast enhancement algorithms add too much contrast to an image, maintaining visual quality should be considered part of enhancing image contrast. This paper focuses on a contrast enhancement method that is based on histogram transformations and uses image quality assessment to automatically select the optimal target histogram. Both contrast improvement and preservation of visual quality are taken into account when building the target histogram, so the method avoids the problem of excessive contrast increase. In the proposed method, the optimal target histogram is a weighted sum of the original histogram, a uniform (homogeneous) histogram, and a Gaussian histogram. Structural and statistical metrics of image naturalness are used to determine the weights of the corresponding histograms, and the contrast-enhanced image is obtained by matching the image to the optimal target histogram. Experiments show that the proposed method gives better results than other existing histogram-based contrast enhancement algorithms.
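A minimal sketch of the histogram-based idea described above: the target histogram is built as a weighted sum of the original, a uniform, and a Gaussian-shaped histogram, and the image is then matched to it. The fixed weights and Gaussian spread below are placeholders; in the proposed method the weights are selected automatically with image quality metrics.

```python
import numpy as np

def enhance_contrast(gray, w_orig=0.4, w_unif=0.3, w_gauss=0.3, sigma=48.0):
    """Histogram matching of a uint8 grayscale image to a weighted target histogram."""
    levels = np.arange(256)
    h_orig, _ = np.histogram(gray, bins=256, range=(0, 256), density=True)
    h_unif = np.full(256, 1.0 / 256)                       # uniform histogram
    h_gauss = np.exp(-((levels - 127.5) ** 2) / (2 * sigma ** 2))
    h_gauss /= h_gauss.sum()                               # Gaussian-shaped histogram
    target = w_orig * h_orig + w_unif * h_unif + w_gauss * h_gauss
    target /= target.sum()
    # Histogram matching: map the source CDF onto the target CDF.
    cdf_src = np.cumsum(h_orig)
    cdf_tgt = np.cumsum(target)
    mapping = np.interp(cdf_src, cdf_tgt, levels)
    return mapping[gray].astype(np.uint8)
```

Setting `w_unif` high pushes the result toward full histogram equalization, while a larger `w_orig` preserves the original appearance; the automatic weight selection balances these two effects.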


Algorithms, 2020, Vol. 13 (12), pp. 313
Author(s): Domonkos Varga

The goal of full-reference image quality assessment (FR-IQA) is to predict the perceptual quality of an image, as perceived by human observers, using its pristine (distortion-free) reference counterpart. In this study, we explore a novel, combined approach that predicts the perceptual quality of a distorted image by compiling a feature vector from convolutional activation maps. More specifically, a reference-distorted image pair is run through a pretrained convolutional neural network, and the activation maps are compared with a traditional image similarity metric. Subsequently, the resulting feature vector is mapped onto perceptual quality scores with the help of a trained support vector regressor. A detailed parameter study is also presented in which the design choices of the proposed method are explained. Furthermore, we study the relationship between the number of training images and the prediction performance, demonstrating that the proposed method can be trained with a small amount of data and still reach high prediction performance. Our best proposal, called ActMapFeat, is compared to the state of the art on six publicly available benchmark IQA databases: KADID-10k, TID2013, TID2008, MDID, CSIQ, and VCL-FER. Our method is able to significantly outperform the state of the art on these benchmark databases.
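A minimal sketch of the general activation-map approach described above, assuming a VGG16 backbone, a handful of ReLU layers, and a plain mean-squared difference per layer; the actual backbone, layer selection, similarity measure, and feature statistics of ActMapFeat may differ.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVR

# Pretrained convolutional backbone used only as a feature extractor.
backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

def activation_features(ref, dist, layers=(3, 8, 15, 22, 29)):
    """Per-layer differences between activation maps of a reference-distorted pair.

    ref, dist: normalized tensors of shape 1 x 3 x H x W.
    """
    feats = []
    x_r, x_d = ref, dist
    with torch.no_grad():
        for idx, layer in enumerate(backbone):
            x_r, x_d = layer(x_r), layer(x_d)
            if idx in layers:
                # One scalar per selected layer; richer statistics could be pooled here.
                feats.append(torch.mean((x_r - x_d) ** 2).item())
    return np.array(feats)

# The feature vectors are then regressed onto subjective scores, e.g.:
# svr = SVR(kernel='rbf').fit(feature_matrix, mos_scores)
```

The feature vector stays short (one or a few values per selected layer), which is one reason such a regressor can be trained with relatively little data.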


2020, Vol. 5 (1)
Author(s): Hui Men, Vlad Hosu, Hanhe Lin, Andrés Bruhn, Dietmar Saupe

Abstract: Current benchmarks for optical flow algorithms evaluate the estimation either directly, by comparing the predicted flow fields with the ground truth, or indirectly, by using the predicted flow fields for frame interpolation and then comparing the interpolated frames with the actual frames. In the latter case, objective quality measures such as the mean squared error are typically employed. However, it is well known from image quality assessment that the actual quality experienced by the user cannot be fully deduced from such simple measures. Hence, we conducted a subjective quality assessment crowdsourcing study for the interpolated frames provided by one of the optical flow benchmarks, the Middlebury benchmark, which contains interpolated frames from 155 methods applied to each of 8 contents. For this purpose, we collected forced-choice paired comparisons between interpolated images and the corresponding ground truth. To increase the sensitivity of observers when judging minute differences in paired comparisons, we introduced a new method to the field of full-reference quality assessment, called artefact amplification. From the crowdsourcing data (3720 comparisons of 20 votes each) we reconstructed absolute quality scale values according to Thurstone's model. As a result, we obtained a re-ranking of the 155 participating algorithms with respect to the visual quality of the interpolated frames. This re-ranking not only shows the necessity of visual quality assessment as another evaluation metric for optical flow and frame interpolation benchmarks; the results also provide the ground truth for designing novel image quality assessment (IQA) methods dedicated to the perceptual quality of interpolated images. As a first step, we proposed such a new full-reference method, called WAE-IQA, which weights the local differences between an interpolated image and its ground truth.
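A minimal sketch of reconstructing scale values from forced-choice paired-comparison counts under Thurstone's Case V model, using the classical least-squares (mean z-score) solution; the vote matrix below is an illustrative example, not the study's data.

```python
import numpy as np
from scipy.stats import norm

def thurstone_case_v(wins):
    """wins[i, j] = number of times item i was preferred over item j."""
    n = wins + wins.T                                # total votes per pair
    with np.errstate(divide='ignore', invalid='ignore'):
        p = np.where(n > 0, wins / n, 0.5)           # preference probabilities
    p = np.clip(p, 0.01, 0.99)                       # avoid infinite z-scores
    np.fill_diagonal(p, 0.5)
    z = norm.ppf(p)                                  # pairwise scale differences
    return z.mean(axis=1)                            # scale value per item (zero-mean)

# Hypothetical example: three items, 20 votes per pair.
wins = np.array([[0, 14, 17],
                 [6,  0, 12],
                 [3,  8,  0]])
print(thurstone_case_v(wins))
```

Items that win their comparisons more often receive larger scale values, giving the absolute quality scale used for the re-ranking.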


2011, Vol. 4 (4), pp. 107-108
Author(s): Deepa Maria Thomas, S. John Livingston
