A novel NMF-based image quality assessment metric using extreme learning machine

Author(s): Shuigen Wang, Chenwei Deng, Weisi Lin, Guang-Bin Huang
2014, Vol 2014, pp. 1-7
Author(s): Li Mao, Lidong Zhang, Xingyang Liu, Chaofeng Li, Hong Yang

Extreme learning machine (ELM) is a class of single-hidden-layer feedforward neural networks (SLFNs) that is simple in theory and fast to train. Zong et al. proposed a weighted extreme learning machine for learning from data with imbalanced class distributions, which retains the advantages of the original ELM. However, both the original ELM and this improved version are based only on the empirical risk minimization principle and may therefore suffer from overfitting. To address this overfitting problem, in this paper we incorporate the structural risk minimization principle into the (weighted) ELM and propose a modified (weighted) extreme learning machine (M-ELM and M-WELM). Experimental results show that the proposed M-WELM outperforms previously reported extreme learning machine algorithms in image quality assessment.
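The structural-risk modification described above amounts to adding an L2 penalty on the ELM output weights, so the closed-form solution becomes a regularized (and optionally sample-weighted) least-squares problem. The NumPy sketch below illustrates that general idea; the hidden-layer size, sigmoid activation, regularization parameter C, and weighting scheme are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def regularized_weighted_elm(X, T, n_hidden=100, C=1.0, sample_weights=None, seed=None):
    """Train a single-hidden-layer ELM with an L2 (structural-risk) term.

    X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets.
    C: trade-off between training error and output-weight norm.
    sample_weights: optional per-sample weights, as in weighted ELM.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape

    # Input weights and biases are assigned randomly and never trained (core ELM idea).
    W_in = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)

    # Hidden-layer output matrix H with a sigmoid activation.
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))

    # Per-sample weights for imbalanced data (all ones = plain regularized ELM).
    w = np.ones(n_samples) if sample_weights is None else np.asarray(sample_weights)

    # Regularized least squares: beta = (I/C + H^T diag(w) H)^-1 H^T diag(w) T
    HtW = H.T * w                       # equivalent to H^T @ diag(w)
    A = np.eye(n_hidden) / C + HtW @ H
    beta = np.linalg.solve(A, HtW @ T)

    def predict(X_new):
        H_new = 1.0 / (1.0 + np.exp(-(X_new @ W_in + b)))
        return H_new @ beta

    return predict
```

With sample_weights chosen per class (for example, inversely proportional to class frequency) this reduces to a weighted variant; with uniform weights it is a plain regularized ELM.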


2017, Vol 47 (1), pp. 232-243
Author(s): Shuigen Wang, Chenwei Deng, Weisi Lin, Guang-Bin Huang, Baojun Zhao

2017, Vol 14 (2), pp. 172988141769462
Author(s): Chenwei Deng, Zhen Li, Shuigen Wang, Xun Liu, Jiahui Dai

Multi-exposure image fusion is becoming increasingly influential in enhancing the quality of experience of consumer electronics. However, few works to date have addressed the performance evaluation of multi-exposure image fusion, especially colored multi-exposure image fusion. Conventional quality assessment methods for multi-exposure image fusion focus mainly on grayscale information and ignore the color components, which also convey vital visual information. We propose an objective method for the quality assessment of colored multi-exposure image fusion based on image saturation, together with texture and structure similarities, which together measure the perceived color, texture, and structure information of fused images. The final image quality is predicted by an extreme learning machine that takes the texture, structure, and saturation similarities as image features. Experimental results on a public multi-exposure image fusion database show that the proposed model accurately predicts the quality of colored multi-exposure fused images and correlates well with human perception. Compared with state-of-the-art image quality assessment models for image fusion, the proposed metric achieves better evaluation performance.
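As a rough illustration of the kind of features involved, the sketch below computes a saturation map for each source exposure and for the fused image and turns them into per-exposure similarity scores. The saturation definition, the SSIM-style similarity form, and the constant c are assumptions for illustration; the paper's texture and structure similarities would be computed analogously before being fed to an ELM regressor trained on subjective scores.

```python
import numpy as np

def saturation(img_rgb):
    """Per-pixel saturation of an RGB image in [0, 1] (HSV-style definition)."""
    mx = img_rgb.max(axis=-1)
    mn = img_rgb.min(axis=-1)
    return np.where(mx > 0, (mx - mn) / (mx + 1e-12), 0.0)

def similarity_map(a, b, c=1e-4):
    """SSIM-style pointwise similarity between two feature maps."""
    return (2 * a * b + c) / (a ** 2 + b ** 2 + c)

def saturation_similarity_features(exposures_rgb, fused_rgb):
    """One scalar saturation-similarity feature per source exposure.

    exposures_rgb: list of source RGB images in [0, 1]; fused_rgb: fused result.
    Texture and structure similarities would be added analogously from
    gradient or structure maps to form the full feature vector.
    """
    s_fused = saturation(fused_rgb)
    return [similarity_map(saturation(src), s_fused).mean() for src in exposures_rgb]
```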


2011, Vol 4 (4), pp. 107-108
Author(s): Deepa Maria Thomas, S. John Livingston

2020, Vol 2020 (9), pp. 323-1-323-8
Author(s): Litao Hu, Zhenhua Hu, Peter Bauer, Todd J. Harris, Jan P. Allebach

Image quality assessment has been a very active research area in the field of image processing, and numerous methods have been proposed. However, most existing methods focus on digital images that only or mainly contain pictures or photos taken by digital cameras. Traditional approaches evaluate an input image as a whole and estimate a single quality score, in order to give viewers an idea of how “good” the image looks. In this paper, we focus on the quality evaluation of symbolic content such as text, barcodes, QR codes, lines, and handwriting in target images. A quality score for this kind of information can be based on whether it is readable by a human or recognizable by a decoder. Specifically, we study the viewing quality of a scanned document produced from a printed image. For this purpose, we propose a novel image quality assessment algorithm that determines the readability of a scanned document or of regions within it. Experimental results on a set of test images demonstrate the effectiveness of our method.
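The "recognizable by a decoder" criterion mentioned above can be illustrated with a trivial proxy: attempt to decode a symbol region with a standard decoder and treat success as a readability indicator. The OpenCV-based snippet below is only a sketch of that idea (the binary score and the choice of cv2.QRCodeDetector are assumptions), not the readability algorithm proposed in the paper.

```python
import cv2

def qr_region_readability(region):
    """Crude readability proxy for a QR-code region of a scanned page:
    1.0 if a standard decoder recovers a non-empty payload, else 0.0."""
    detector = cv2.QRCodeDetector()
    payload, _points, _straight_code = detector.detectAndDecode(region)
    return 1.0 if payload else 0.0
```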


2020, Vol 64 (1), pp. 10505-1-10505-16
Author(s): Yin Zhang, Xuehan Bai, Junhua Yan, Yongqi Xiao, C. R. Chatwin, ...

A new blind image quality assessment method, No-Reference Image Quality Assessment Based on Multi-Order Gradients Statistics, is proposed to address two problems with existing no-reference methods: they cannot determine the type of image distortion, and their quality estimates are not robust across different distortion types. An 18-dimensional image feature vector is constructed from gradient magnitude features, relative gradient orientation features, and relative gradient magnitude features over two scales and three orders, on the basis of the relationship between multi-order gradient statistics and the type and degree of image distortion. The feature matrix and distortion types of known distorted images are used to train an AdaBoost_BP neural network that determines the image distortion type; the feature matrix and subjective scores of known distorted images are used to train an AdaBoost_BP neural network that determines the image distortion degree. A series of comparative experiments was carried out on the Laboratory of Image and Video Engineering (LIVE), LIVE Multiply Distorted Image Quality, Tampere Image, and Optics Remote Sensing Image databases. Experimental results show that the proposed method judges the distortion type with high accuracy and that its quality scores show good subjective consistency and robustness across all types of distortion. The performance of the proposed method is not restricted to a particular database, and the method has high computational efficiency.
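The sketch below shows how gradient-based statistics of this general kind can be gathered over two scales with NumPy/SciPy. The window size, the mean/standard-deviation summaries, the naive angle differencing, and the simple downsampling are illustrative assumptions; this does not reproduce the paper's exact 18-dimensional feature vector or its AdaBoost_BP training stage.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def gradient_statistics(gray, window=3):
    """Per-scale statistics of the gradient magnitude, the relative gradient
    orientation, and the relative gradient magnitude (each taken relative to
    a local average over a small window)."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)

    # "Relative" maps: deviation from the local mean (angle wrap-around ignored
    # here for simplicity).
    rel_mag = mag - uniform_filter(mag, size=window)
    rel_ori = ori - uniform_filter(ori, size=window)

    feats = []
    for m in (mag, rel_ori, rel_mag):
        feats.extend([m.mean(), m.std()])
    return feats

def multi_scale_gradient_features(gray, scales=(1.0, 0.5)):
    """Concatenate the statistics over two scales (downsampled with `zoom`)."""
    feats = []
    for s in scales:
        feats.extend(gradient_statistics(zoom(gray, s)))
    return np.asarray(feats)
```

In the paper, feature vectors of this flavor (together with distortion-type labels or subjective scores) are what the AdaBoost_BP networks are trained on.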

