Perceptual Quality Assessment of Low-light Image Enhancement

Author(s):  
Guangtao Zhai ◽  
Wei Sun ◽  
Xiongkuo Min ◽  
Jiantao Zhou

Low-light image enhancement algorithms (LIEA) can light up images captured in dark or back-lighting conditions. However, LIEA may introduce various distortions, such as structure damage, color shift, and noise, into the enhanced images. Despite the many LIEAs proposed in the literature, few efforts have been made to study the quality evaluation of low-light enhancement. In this article, we make one of the first attempts to investigate the quality assessment problem of low-light image enhancement. To facilitate the study of objective image quality assessment (IQA), we first build a large-scale low-light image enhancement quality (LIEQ) database. The LIEQ database includes 1,000 light-enhanced images, which are generated from 100 low-light images using 10 LIEAs. Rather than evaluating the quality of light-enhanced images directly, which is more difficult, we propose to use the multi-exposure fused (MEF) image and the stack-based high dynamic range (HDR) image as references and to evaluate the quality of low-light enhancement following a full-reference (FR) quality assessment routine. We observe that the distortions introduced by low-light enhancement differ significantly from the distortions considered in traditional, well-studied IQA databases, and that current state-of-the-art FR IQA models are not suitable for evaluating the quality of light-enhanced images. Therefore, we propose a new FR low-light image enhancement quality assessment (LIEQA) index that evaluates image quality from four aspects: luminance enhancement, color rendition, noise evaluation, and structure preservation, which capture the key aspects of low-light enhancement. Experimental results on the LIEQ database show that the proposed LIEQA index outperforms state-of-the-art FR IQA models. LIEQA can act as an evaluator for various low-light enhancement algorithms and systems. To the best of our knowledge, this article is the first comprehensive study of its kind on low-light image enhancement quality assessment.
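
The abstract does not give the LIEQA formulation itself, but a toy sketch of how four aspect scores (luminance enhancement, color rendition, noise, structure preservation) could be computed against a reference and pooled illustrates the FR routine it describes. Every formula below is an assumption for illustration, not the paper's method.

```python
import cv2
import numpy as np

def lieqa_sketch(enhanced_bgr, reference_bgr, weights=(0.25, 0.25, 0.25, 0.25)):
    """Toy FR index pooling four aspect scores; NOT the paper's formulation."""
    enh_gray = cv2.cvtColor(enhanced_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    ref_gray = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # Luminance enhancement: closeness of mean luminance to the reference.
    luminance = 1.0 - abs(enh_gray.mean() - ref_gray.mean()) / 255.0

    # Color rendition: similarity of mean chroma (a*, b*) in CIELAB.
    enh_ab = cv2.cvtColor(enhanced_bgr, cv2.COLOR_BGR2LAB)[..., 1:].astype(np.float32)
    ref_ab = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2LAB)[..., 1:].astype(np.float32)
    color = 1.0 - np.linalg.norm(enh_ab.mean((0, 1)) - ref_ab.mean((0, 1))) / 255.0

    # Noise evaluation: penalize deviation in high-frequency (Laplacian) energy.
    lap = lambda g: cv2.Laplacian(g, cv2.CV_32F).var()
    noise = 1.0 / (1.0 + abs(lap(enh_gray) - lap(ref_gray)) / 1e4)

    # Structure preservation: correlation of gradient magnitudes.
    grad = lambda g: np.hypot(cv2.Sobel(g, cv2.CV_32F, 1, 0),
                              cv2.Sobel(g, cv2.CV_32F, 0, 1)).ravel()
    structure = float(np.corrcoef(grad(enh_gray), grad(ref_gray))[0, 1])

    return float(np.dot(weights, [luminance, color, noise, structure]))
```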

Algorithms ◽  
2020 ◽  
Vol 13 (12) ◽  
pp. 313
Author(s):  
Domonkos Varga

The goal of full-reference image quality assessment (FR-IQA) is to predict the perceptual quality of an image, as perceived by human observers, using its pristine (distortion-free) reference counterpart. In this study, we explore a novel combined approach that predicts the perceptual quality of a distorted image by compiling a feature vector from convolutional activation maps. More specifically, a reference-distorted image pair is run through a pretrained convolutional neural network and the activation maps are compared with a traditional image similarity metric. Subsequently, the resulting feature vector is mapped onto perceptual quality scores with the help of a trained support vector regressor. A detailed parameter study is also presented, in which the design choices of the proposed method are explained. Furthermore, we study the relationship between the amount of training images and the prediction performance, demonstrating that the proposed method can be trained with a small amount of data and still reach high prediction performance. Our best proposal, called ActMapFeat, is compared to the state of the art on six publicly available benchmark IQA databases: KADID-10k, TID2013, TID2008, MDID, CSIQ, and VCL-FER, where it significantly outperforms existing methods.
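
A minimal sketch of the described pipeline, assuming a torchvision VGG-16 backbone, cosine similarity as the per-channel activation-map comparison, and scikit-learn's SVR; the actual layer selection and similarity metric are design choices studied in the paper, so the specifics here are placeholders.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVR

# VGG-16 as a plausible pretrained backbone; the layer indices are assumptions.
cnn = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

def activation_features(ref, dist, layers=(3, 8, 15, 22, 29)):
    """Compare activation maps of a normalized (1, 3, H, W) reference/
    distorted tensor pair, layer by layer, into one feature vector."""
    feats, x_ref, x_dist = [], ref, dist
    with torch.no_grad():
        for i, layer in enumerate(cnn):
            x_ref, x_dist = layer(x_ref), layer(x_dist)
            if i in layers:
                # One similarity value per channel; cosine similarity stands
                # in for the traditional similarity metric used in the paper.
                sim = torch.nn.functional.cosine_similarity(
                    x_ref.flatten(2), x_dist.flatten(2), dim=2)
                feats.append(sim.squeeze(0).numpy())
    return np.concatenate(feats)

# The feature vectors are then mapped onto quality scores by a support
# vector regressor trained on subjective (MOS) scores:
#   svr = SVR(kernel="rbf").fit(X_train, y_train)
#   score = svr.predict(activation_features(ref, dist)[None, :])
```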


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Rafal Obuchowicz ◽  
Mariusz Oszust ◽  
Adam Piorkowski

Abstract Background The perceptual quality of magnetic resonance (MR) images influences diagnosis and may compromise treatment. The purpose of this study was to evaluate how changes in image quality influence the interobserver variability of its assessment. Methods For the variability evaluation, a dataset containing distorted MR images was prepared and then assessed by 31 experienced medical professionals (radiologists). Differences between observers were analyzed using Fleiss' kappa. However, since the kappa evaluates agreement among radiologists on the basis of aggregated decisions, a typically employed criterion of image quality assessment (IQA) performance was used to provide a more thorough analysis. The IQA performance of radiologists was evaluated by comparing the Spearman correlation coefficient, ρ, between individual scores and the mean opinion scores (MOS) composed of the subjective opinions of the remaining professionals. Results The experiments show that there is a significant agreement among radiologists (κ = 0.12; 95% confidence interval [CI]: 0.118, 0.121; P < 0.001) on the quality of the assessed images. The resulting κ is strongly affected by the subjectivity of the assigned scores, as observers often assigned close but not identical ratings. Therefore, ρ was used to identify poor-performance cases and to confirm the consistency of the majority of the collected scores (ρmean = 0.5706). The results for interns (ρmean = 0.6868) support the finding that the quality assessment of MR images can be successfully taught. Conclusions The agreement observed among radiologists from different imaging centers confirms the subjectivity of the perception of MR images. It was shown that the image content and the severity of distortions affect the IQA. Furthermore, the study highlights the importance of the psychosomatic condition of the observers and their attitude.
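
The two statistics the study relies on, Fleiss' kappa over aggregated ratings and each observer's Spearman ρ against the leave-one-out MOS, can be reproduced with standard libraries. The sketch below assumes an (images × raters) matrix of integer scores; it mirrors the described analysis, not the study's exact code.

```python
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

def agreement_analysis(scores):
    """scores: (n_images, n_radiologists) matrix of integer quality ratings."""
    # Fleiss' kappa over per-image category counts (aggregated decisions).
    counts, _ = aggregate_raters(scores)
    kappa = fleiss_kappa(counts)

    # Each observer's Spearman rho against the leave-one-out MOS, i.e. the
    # mean opinion composed of the remaining professionals' scores.
    rhos = []
    for j in range(scores.shape[1]):
        loo_mos = np.delete(scores, j, axis=1).mean(axis=1)
        rhos.append(spearmanr(scores[:, j], loo_mos).correlation)
    return kappa, np.array(rhos)
```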


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6457
Author(s):  
Hayat Ullah ◽  
Muhammad Irfan ◽  
Kyungjin Han ◽  
Jong Weon Lee

Due to recent advancements in virtual reality (VR) and augmented reality (AR), the demand for high-quality immersive content is a primary concern for production companies and consumers. Similarly, the recent record-breaking performance of deep learning in various domains of artificial intelligence has drawn researchers to contribute to different fields of computer vision. To ensure the quality of immersive media content using these advanced deep learning technologies, several learning-based stitched image quality assessment methods have been proposed with reasonable performance. However, these methods are unable to localize, segment, and extract the stitching errors in panoramic images, and they rely on computationally complex procedures for quality assessment. With these motivations, in this paper we propose a novel three-fold Deep Learning based No-Reference Stitched Image Quality Assessment (DLNR-SIQA) approach to evaluate the quality of immersive content. In the first fold, we fine-tune the state-of-the-art Mask R-CNN (Region-based Convolutional Neural Network) on cropped images with various manually annotated stitching errors drawn from two publicly available datasets. In the second fold, we segment and localize the stitching errors present in the immersive content. Finally, based on the distorted regions, we measure the overall quality of the stitched images. Unlike existing methods that only measure image quality using deep features, the proposed method can efficiently segment and localize stitching errors and estimate the image quality by investigating the segmented regions. We also carried out an extensive qualitative and quantitative comparison with full-reference image quality assessment (FR-IQA) and no-reference image quality assessment (NR-IQA) methods on two publicly available datasets, where the proposed system outperformed the existing state-of-the-art techniques.
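
A structural sketch of the second and third folds, assuming torchvision's stock Mask R-CNN in place of the paper's fine-tuned stitching-error model, and a simple coverage-based pooling of detected error regions into a quality score; both the weights and the pooling rule are assumptions.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# The paper fine-tunes Mask R-CNN on stitching-error classes; the stock
# COCO weights here are purely a structural placeholder.
model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

def stitched_image_quality(image, score_thresh=0.5):
    """Toy quality score: 1 minus the image fraction covered by detected
    (stitching-error) regions, weighted by detection confidence."""
    with torch.no_grad():
        out = model([image])[0]      # image: float tensor (3, H, W) in [0, 1]
    h, w = image.shape[1:]
    covered = torch.zeros(h, w)
    for mask, score in zip(out["masks"], out["scores"]):
        if score >= score_thresh:
            covered = torch.maximum(covered, mask[0] * score)
    return 1.0 - covered.mean().item()
```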


2021 ◽  
Vol 2021 (9) ◽  
pp. 256-1-256-11
Author(s):  
Rafael Diniz ◽  
Pedro Garcia Freitas ◽  
Mylène Farias

In recent years, point clouds (PCs) have become very popular for a wide range of applications, such as immersive virtual reality scenarios. As a consequence, over the last couple of years there has been a great effort in the research community to develop novel acquisition, representation, compression, and transmission solutions for PC content, and in particular objective quality assessment methods that are able to predict the perceptual quality of PCs. In this paper, we present an effective novel method for assessing the quality of PCs, based on descriptors that extract perceptual color-distance-based texture information of PC content, called Perceptual Color Distance Patterns (PCDP). In this framework, the statistics of the extracted information are used to model the PC visual quality. Experimental results show that the proposed framework exhibits good and robust performance when compared with several state-of-the-art point cloud quality assessment (PCQA) methods.
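
The abstract does not define PCDP precisely; the sketch below only illustrates the general idea of descriptors built from statistics of perceptual color distances between neighboring points, using Euclidean distance in CIELAB as a stand-in perceptual metric. Names, neighborhood size, and binning are all assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def pcdp_like_descriptor(points, lab_colors, k=8, bins=16):
    """Toy descriptor in the spirit of PCDP: statistics of perceptual color
    distances between each point and its k nearest spatial neighbors.
    points: (N, 3) xyz; lab_colors: (N, 3) CIELAB, where Euclidean distance
    roughly approximates perceptual color difference."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)      # first neighbor is the point itself
    neighbor_lab = lab_colors[idx[:, 1:]]     # (N, k, 3)
    dists = np.linalg.norm(neighbor_lab - lab_colors[:, None, :], axis=2)
    hist, _ = np.histogram(dists, bins=bins, range=(0, 100), density=True)
    return np.concatenate([hist, [dists.mean(), dists.std()]])

# A regressor trained on subjective scores would then map such descriptors
# of reference/degraded point clouds to a quality estimate.
```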


2021 ◽  
pp. 226-237
Author(s):  
Sigan Yao ◽  
Yiqin Zhu ◽  
Lingyu Liang ◽  
Tao Wang

Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 746
Author(s):  
Shouxin Liu ◽  
Wei Long ◽  
Lei He ◽  
Yanyan Li ◽  
Wei Ding

In this paper, we propose the Retinex-based fast algorithm (RBFA) for low-light image enhancement, which can restore information obscured by low illuminance. The proposed algorithm consists of the following parts. First, we convert the low-light image from the RGB (red, green, blue) color space to the HSV (hue, saturation, value) color space and use a linear function to stretch the original gray-level dynamic range of the V component. Then, we estimate the illumination image via adaptive gamma correction and use the Retinex model to achieve brightness enhancement. After that, we further stretch the gray-level dynamic range to avoid low image contrast. Finally, we design another mapping function to achieve color saturation correction and convert the enhanced image from the HSV color space back to the RGB color space, obtaining the final clear image. The experimental results show that images enhanced with the proposed method receive better qualitative and quantitative evaluations at lower computational complexity than those of other state-of-the-art methods.
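
The steps enumerated in the abstract map naturally onto a short OpenCV/NumPy routine. In the sketch below, the concrete stretching function, gamma adaptation, and saturation mapping are assumptions, since the paper's exact functions are not given here.

```python
import cv2
import numpy as np

def rbfa_sketch(bgr):
    """Follows the steps described in the abstract; the specific mappings
    used at each step are illustrative assumptions."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h, s, v = cv2.split(hsv)
    v /= 255.0

    # 1. Linear stretch of the V component to the full dynamic range.
    v = (v - v.min()) / (v.max() - v.min() + 1e-6)

    # 2. Estimate the illumination as a smoothed V, brightened by adaptive
    #    gamma correction (gamma derived from the mean brightness).
    illum = cv2.GaussianBlur(v, (15, 15), 0)
    gamma = np.log(0.5) / np.log(illum.mean() + 1e-6)
    illum_adj = np.power(illum, gamma)

    # 3. Retinex model: reflectance = V / illumination, recombined with the
    #    adjusted illumination to enhance brightness.
    v = np.clip(v / (illum + 1e-6) * illum_adj, 0, 1)

    # 4. Second stretch to avoid low contrast.
    v = (v - v.min()) / (v.max() - v.min() + 1e-6)

    # 5. Simple saturation correction so colors do not wash out.
    s = np.clip(s * 1.1, 0, 255)

    out = cv2.merge([h, s, v * 255.0]).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_HSV2BGR)
```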


Author(s):  
Choundur Vishnu

High-quality images are valuable for many applications. However, not every image is of acceptable quality, as images are captured under differing lighting conditions. When an image is captured in a low-light state, the pixel values fall within a low range, which causes the image quality to decrease noticeably. Since the entire image appears dark, it is difficult to recognize objects or surfaces clearly, so it is vital to improve the quality of low-light images. Low-light image enhancement is required in numerous computer vision tasks for object detection and scene understanding. Images captured in low light often suffer from low contrast and brightness, which greatly increases the difficulty of subsequent high-level tasks. Low-light image enhancement using a convolutional neural network takes dark images as input and produces bright images as output without disturbing the content of the image, making it easier to understand the scene captured in the image.
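
A minimal PyTorch sketch of the idea, dark image in, bright image out with content preserved; the architecture and the residual formulation are illustrative assumptions, not a published design.

```python
import torch
import torch.nn as nn

class LowLightCNN(nn.Module):
    """Minimal illustrative network mapping a dark RGB image to a bright one.
    Real methods use deeper architectures and task-specific losses."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # brightening residual
        )

    def forward(self, x):
        # Predict a residual added to the input, so image content is
        # preserved while brightness is lifted, as the text emphasizes.
        return torch.clamp(x + self.net(x), 0.0, 1.0)

# Such a network would be trained on (low-light, well-lit) image pairs,
# e.g. with an L1 loss: nn.L1Loss()(model(dark_batch), bright_batch)
```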


2021 ◽  
Vol 12 ◽  
Author(s):  
Nandhini Abirami R. ◽  
Durai Raj Vincent P. M.

Image enhancement is considered one of the more complex tasks in image processing. When images are captured under dim light, their quality degrades due to low visibility, which in turn degrades the performance of vision-based algorithms built for good-quality images with better visibility. Since the emergence of deep neural networks, a number of methods have been put forward to improve images captured under low light, but the results of existing low-light enhancement methods are not satisfactory because of the lack of effective network structures. A low-light image enhancement technique (LIMET) with a fine-tuned conditional generative adversarial network is presented in this paper. The proposed approach employs two discriminators to acquire a semantic meaning that forces the obtained results to be realistic and natural. Finally, the proposed approach is evaluated on benchmark datasets. The experimental results highlight that the presented approach attains state-of-the-art performance when compared to existing methods. The models' performance is assessed using Visual Information Fidelity (VIF), which assesses the generated image's quality relative to the degraded input. The VIF values obtained with the proposed approach are 0.709123 on the LIME dataset, 0.849982 on the DICM dataset, and 0.619342 on the MEF dataset.
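
A hedged sketch of a generator update under two discriminators, assuming (hypothetically) one global and one patch-level discriminator plus an L1 content term; the paper's actual architectures, losses, and loss weights are not specified in the abstract.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def generator_step(G, D_global, D_local, opt_G, low, real, crop):
    """One generator update. D_global judges whole images and D_local judges
    patches extracted by crop(), so realism is enforced at two scopes; the
    split into global/local discriminators is an assumption."""
    fake = G(low)
    pred_g = D_global(fake)
    pred_l = D_local(crop(fake))
    # Adversarial terms from both discriminators push outputs to look real.
    adv = bce(pred_g, torch.ones_like(pred_g)) + bce(pred_l, torch.ones_like(pred_l))
    # Content term keeps the enhanced image close to the well-lit target.
    content = nn.functional.l1_loss(fake, real)
    loss = adv + 10.0 * content      # the weighting is an assumption
    opt_G.zero_grad(); loss.backward(); opt_G.step()
    return loss.item()
```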

