Color constancy models inspired by human visual system: Survey paper

Author(s):  
Mohammed Khalil ◽  
Jian-Ping Li ◽  
Kamlesh Kumar ◽  
Xiao-Long Tang ◽  
Ping Kuang
2003 ◽  
Vol 26 (1) ◽  
pp. 29-30 ◽  
Author(s):  
Brian V. Funt

Abstract
Byrne & Hilbert's thesis that color should be identified with reflectance-type is questioned on the grounds that it is far from clear that the human visual system can determine a surface's reflectance-type with sufficient accuracy. In addition, a (friendly) suggestion is made as to how the definition of reflectance-type might be amended in terms of CIE (Commission Internationale de l'Eclairage) coordinates under a canonical illuminant.


2020 ◽  
Vol 10 (12) ◽  
pp. 4395
Author(s):  
Jongsu Yoon ◽  
Yoonsik Choe

Retinex theory models the human visual system by recovering the relative reflectance of an object under varying illumination conditions. Color constancy is a central feature of the human visual system, and Retinex theory is designed around it. Retinex algorithms have been widely used to decompose an image into the illumination and reflectance of an object. The main aim of this paper is to study image enhancement using convolutional sparse coding and sparse representations of the reflectance component of the Retinex model over a learned dictionary. To realize this, we use a convolutional sparse coding model to represent the reflectance component in detail. In addition, we propose that the reflectance component can be reconstructed using a general dictionary trained with convolutional sparse coding on a large dataset. We use limited-memory singular value decomposition to construct the best reflectance dictionary. As the experimental results show, the resulting reflectance component provides improved visual quality over conventional methods. Consequently, the proposed Retinex-based image enhancement can reduce the difference in perception between humans and machines.
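The decomposition this abstract builds on can be sketched in its simplest form: single-scale Retinex, where illumination is approximated by a Gaussian blur and reflectance is the log-domain residual. This is only a minimal stand-in for the paper's convolutional-sparse-coding reconstruction, written in plain NumPy; the blur scale and the synthetic test image are illustrative assumptions.

```python
import numpy as np

def _gaussian_blur(img, sigma):
    """Separable Gaussian blur in plain NumPy (zero-padded edges)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def single_scale_retinex(image, sigma=5.0, eps=1e-6):
    """Decompose an image as I = R * L: illumination L is the
    Gaussian-smoothed image, and reflectance is recovered in the
    log domain as log R = log I - log L."""
    image = image.astype(np.float64) + eps
    illumination = _gaussian_blur(image, sigma) + eps
    log_reflectance = np.log(image) - np.log(illumination)
    return illumination, log_reflectance

# Synthetic scene: random reflectance under a smooth illumination gradient.
rng = np.random.default_rng(0)
true_reflectance = rng.uniform(0.2, 1.0, size=(64, 64))
light = np.linspace(0.3, 1.0, 64)[None, :] * np.ones((64, 64))
L, logR = single_scale_retinex(true_reflectance * light)
```

The paper replaces this crude smoothing with a learned convolutional dictionary for the reflectance term, but the illumination/reflectance split itself is the same.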


2002 ◽  
Vol 69 (5) ◽  
pp. 327 ◽  
Author(s):  
N. N. Krasilnikov ◽  
O. I. Krasilnikova ◽  
Yu. E. Shelepin

2020 ◽  
Vol 2020 (1) ◽  
pp. 60-64
Author(s):  
Altynay Kadyrova ◽  
Majid Ansari-Asl ◽  
Eva Maria Valero Benito

Colour is one of the most important appearance attributes in a variety of fields, in both science and industry. The focus of this work is on the cosmetics field, and specifically on how well the human visual system selects the foundation makeup colour that best matches a person's skin colour. In many cases, colour evaluations tend to be subjective and vary from person to person, making it challenging to quantify colour for objective evaluation and measurement. Although much research has been done on colour quantification in the last few decades, to the best of our knowledge this is the first study to objectively evaluate a consumer's visual system in skin colour matching through a psychophysical experiment under different illuminations, exploiting spectral measurements. In this paper, the experiment setup is discussed and the results are presented. The correlation between observers' skin colour evaluations using PANTONE Skin Tone Guide samples and spectroradiometer measurements is assessed. Moreover, inter- and intra-observer variability are examined and discussed. The results reveal differences between nine ethnic groups, between two genders, and between the measurements under two illuminants (i.e., D65 and F (fluorescent)). The results further show that skin colour assessment was better under D65 than under the F illuminant. The human visual system was three times worse than the instrument at colour matching, in terms of colour difference between skin and PANTONE Skin Tone Guide samples. Observers tended to choose lighter, less reddish, and consequently paler colours as the best match to their skin colour. These results have practical applications: they can be used, for example, to design an application for foundation colour selection based on the correlation between colour measurements and subjective evaluations by the human visual system.
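The instrument-versus-observer comparison described above is typically quantified in CIELAB colour space. A minimal sketch using the CIE76 ΔE*ab formula; the skin and sample coordinates below are hypothetical values for illustration, not data from the study.

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference ΔE*ab between two CIELAB triplets."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

# Hypothetical (L*, a*, b*) readings under D65 -- illustrative only.
skin = (62.0, 14.5, 18.0)           # spectroradiometer reading of the skin
pantone_best = (64.0, 13.0, 16.5)   # instrument-matched Skin Tone Guide sample
observer_pick = (68.0, 10.0, 14.0)  # a lighter, less reddish visual choice

instrument_error = delta_e_ab(skin, pantone_best)
visual_error = delta_e_ab(skin, observer_pick)  # larger ΔE than the instrument's
```

Comparing these two ΔE values per observer and per illuminant is one straightforward way to express the "three times worse" finding numerically.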


2012 ◽  
Vol 58 (2) ◽  
pp. 147-152
Author(s):  
Michal Mardiak ◽  
Jaroslav Polec

Objective Video Quality Method Based on Mutual Information and Human Visual System
In this paper we present an objective video quality metric based on mutual information and the Human Visual System (HVS). The calculation of the proposed metric consists of two stages. In the first stage, the whole original and test sequences are pre-processed by an HVS model. In the second stage, we calculate the mutual information, which is used as the quality evaluation criterion. The mutual information is calculated between each frame of the original sequence and the corresponding frame of the test sequence. For testing we chose the Foreman video at CIF resolution. To verify the reliability of our metric, we compared it with several commonly used objective methods for measuring video quality. The results show that the presented metric agrees well with the results of other objective methods, making it a suitable candidate for measuring video quality.
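The second-stage criterion described above reduces to estimating the mutual information between corresponding frames from their joint intensity histogram. A minimal NumPy sketch; the HVS pre-processing stage is omitted, and the bin count and test frames are illustrative assumptions.

```python
import numpy as np

def mutual_information(frame_a, frame_b, bins=32):
    """Mutual information (in bits) between two grayscale frames,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(frame_a.ravel(), frame_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of frame_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of frame_b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# A reference frame and a noise-degraded copy: MI drops as distortion grows.
rng = np.random.default_rng(1)
orig = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
degraded = orig + rng.normal(0.0, 30.0, orig.shape)
```

Averaging this per-frame score over the sequence would give a single quality number in the spirit of the proposed metric.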


Author(s):  
Wen-Han Zhu ◽  
Wei Sun ◽  
Xiong-Kuo Min ◽  
Guang-Tao Zhai ◽  
Xiao-Kang Yang

Abstract
Objective image quality assessment (IQA) plays an important role in various visual communication systems, as it can automatically and efficiently predict the perceived quality of images. The human eye is the ultimate evaluator of visual experience, so modeling the human visual system (HVS) is a core issue for objective IQA and visual experience optimization. Traditional models based on black-box fitting have low interpretability and offer little guidance for experience optimization, while models based on physiological simulation are hard to integrate into practical visual communication services due to their high computational complexity. To bridge the gap between signal distortion and visual experience, in this paper we propose a novel perceptual no-reference (NR) IQA algorithm based on structural computational modeling of the HVS. Following the mechanisms of the human brain, we divide visual signal processing into a low-level visual layer, a middle-level visual layer and a high-level visual layer, which perform pixel information processing, primitive information processing and global image information processing, respectively. Natural scene statistics (NSS) based features, deep features and free-energy based features are extracted from these three layers. Support vector regression (SVR) is employed to aggregate the features into the final quality prediction. Extensive experimental comparisons on three widely used benchmark IQA databases (LIVE, CSIQ and TID2013) demonstrate that the proposed metric is highly competitive with, or outperforms, state-of-the-art NR IQA measures.
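As one concrete piece of such a pipeline, NSS features for a low-level visual layer are commonly computed from mean-subtracted contrast-normalized (MSCN) coefficients. The sketch below assumes a simple box filter for the local statistics and substitutes a least-squares fit for the paper's SVR, purely for illustration; the feature set, training images and quality scores are all hypothetical.

```python
import numpy as np

def box_blur(img, radius=3):
    """Local mean via a separable box filter (a stand-in for the
    Gaussian weighting used in standard NSS pipelines)."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def nss_features(img, eps=1e-6):
    """Low-level-layer features: simple statistics of MSCN coefficients."""
    img = np.asarray(img, dtype=np.float64)
    mu = box_blur(img)
    var = np.maximum(box_blur(img ** 2) - mu ** 2, 0.0)
    mscn = (img - mu) / (np.sqrt(var) + eps)
    return np.array([
        mscn.mean(),
        mscn.var(),
        np.mean(np.abs(mscn)),                          # scale proxy
        np.mean(mscn ** 4) / (mscn.var() ** 2 + eps),   # kurtosis
    ])

# Hypothetical training: four noise images of increasing contrast paired
# with made-up quality scores; least squares stands in for the SVR.
rng = np.random.default_rng(0)
X = np.stack([nss_features(rng.normal(scale=s, size=(32, 32))) for s in (1, 2, 3, 4)])
y = np.array([1.0, 2.0, 3.0, 4.0])
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(4)], y, rcond=None)
```

In the full method, features from the middle (deep) and high (free-energy) layers would be concatenated with these before the regression step.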

