No-reference image visual quality assessment using nonlinear regression

Author(s): Martin D. Dimitrievski, Zoran A. Ivanovski, Tomislav P. Kartalov (2012)

Author(s): Nikolay N. Ponomarenko, Vladimir V. Lukin, Oleg I. Eremeev, Karen O. Egiazarian, Jaakko T. Astola

Sensors, 2021, Vol. 22 (1), p. 175
Author(s): Ghislain Takam Tchendjou, Emmanuel Simeu

This paper presents a new objective method for estimating perceived visual quality. The proposed approach assesses image quality without a reference image or any assumption about the type of distortion. Two main processes are used to build the models. The first applies deep learning with a convolutional neural network, without any preprocessing. The second computes objective visual quality by pooling several image features drawn from different concepts: natural scene statistics in the spatial domain, the gradient magnitude, the Laplacian of Gaussian, and spectral and spatial entropies. The features extracted from the image are used as inputs to machine learning techniques that build models for estimating the visual quality of any image. For the training phase, two processes are proposed. The first is direct learning, which uses all the selected features in a single training phase and is named direct learning blind visual quality assessment (DLBQA). The second is indirect learning, which consists of two training phases and is named indirect learning blind visual quality assessment (ILBQA); it includes an additional phase that constructs intermediary metrics used to build the prediction model. The resulting models are evaluated on several benchmark image databases, including TID2013, LIVE, and the LIVE in the Wild Image Quality Challenge. The experimental results show that the proposed models deliver the best visual quality prediction compared with state-of-the-art models. The models have also been implemented on an FPGA platform to demonstrate the feasibility of integrating the proposed solution into an image sensor.
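The feature-pooling pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it computes a simplified stand-in feature set (gradient-magnitude statistics, a discrete Laplacian instead of the Laplacian of Gaussian, and a histogram-based spatial entropy) and fits a plain least-squares regression in place of the paper's nonlinear machine learning models; the function names are hypothetical.

```python
import numpy as np

def extract_features(img):
    """Pool a few no-reference features from a grayscale image in [0, 1].

    Simplified stand-ins for the feature concepts named in the paper
    (gradient magnitude, Laplacian, spatial entropy); not the exact set.
    """
    gy, gx = np.gradient(img)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)

    # Discrete 4-neighbour Laplacian (a rough proxy for the LoG response).
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)

    # Spatial entropy from a 32-bin intensity histogram.
    hist, _ = np.histogram(img, bins=32, range=(0.0, 1.0))
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))

    return np.array([grad_mag.mean(), grad_mag.std(), lap.var(), entropy])

def train_quality_model(feature_matrix, mos):
    """Fit a linear map from pooled features to subjective scores (MOS).

    A least-squares placeholder for the nonlinear regression / CNN models
    the paper actually trains.
    """
    X = np.hstack([feature_matrix, np.ones((feature_matrix.shape[0], 1))])
    w, *_ = np.linalg.lstsq(X, mos, rcond=None)
    return w

def predict_quality(w, features):
    """Estimate the quality score of a new image from its features."""
    return float(np.append(features, 1.0) @ w)
```

In the paper's ILBQA variant, a second training stage would sit between `extract_features` and the final model, first mapping feature groups to intermediary metrics and then regressing those metrics onto the subjective scores.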


IEEE Access, 2021, pp. 1-1
Author(s): Ilyass Abouelaziz, Aladine Chetouani, Mohammed El Hassouni, Hocine Cherifi, Longin Jan Latecki

Author(s): Emilie Bosc, Patrick Le Callet, Luce Morin, Muriel Pressigout

2017, Vol. 63 (1), pp. 71-81
Author(s): Min Liu, Ke Gu, Guangtao Zhai, Patrick Le Callet, Wenjun Zhang

2007, Vol. 43 (21), p. 1134
Author(s): D.-O. Kim, R.-H. Park, D.-G. Sim
