Removing color cast of underwater images through non-constant color constancy hypothesis

Author(s):  
Birgit Henke ◽  
Matthias Vahl ◽  
Zhiliang Zhou

2015 ◽
Vol 731 ◽  
pp. 18-21
Author(s):  
Qiang Liu ◽  
Xiao Xia Wan ◽  
Zhen Liu ◽  
Peng Sun

Color constancy is a key metric for evaluating color reproduction performance. This contribution proposes a color-constancy-based spectral separation method for multi-ink printers from the perspective of color perception. Building on our previously developed spectral printer modeling workflow, the proposed method achieves high-level color-constant color reproduction. The experimental results show that the workflow described in the paper not only makes full use of the device gamut but also clearly improves overall color constancy performance. On average, the Color Inconstancy Index of the reproduced colors is reduced from 2.884 ΔE00 to 2.016 ΔE00, while maintaining reasonable spectral and colorimetric reproduction accuracy.
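The Color Inconstancy Index reported above is, in essence, a color difference between a patch's appearance under a reference illuminant and its corresponding (chromatically adapted) color under a test illuminant. The paper measures it in ΔE00 (CIEDE2000); the sketch below uses the simpler CIE76 Euclidean ΔE*ab as a stand-in, and the Lab coordinates are hypothetical illustration values, not the paper's data.

```python
import numpy as np

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB.
    (The paper uses CIEDE2000; CIE76 is a simplified stand-in.)"""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

def color_inconstancy_index(lab_reference, lab_corresponding):
    """Color Inconstancy Index: the difference between a patch's Lab
    coordinates under the reference illuminant and its chromatically
    adapted (corresponding) color under the test illuminant."""
    return delta_e_76(lab_reference, lab_corresponding)

# Hypothetical Lab values for one printed patch under D65 and,
# after chromatic adaptation, under illuminant A:
lab_d65 = (52.0, 10.0, -8.0)
lab_a = (51.2, 12.5, -5.5)
cii = color_inconstancy_index(lab_d65, lab_a)
```

A perfectly color-constant reproduction would yield a CII of zero for every patch; the paper's averages (2.884 before, 2.016 after) aggregate such per-patch differences over the reproduced color set.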


Author(s):  
Samarth Borkar ◽  
Sanjiv V. Bonde

Underwater images are prone to contrast loss, limited visibility, and undesirable color casts. For underwater computer vision and pattern recognition algorithms, these images need to be pre-processed. We address this problem with a novel, fully automated underwater image dehazing method based on multimodal DWT fusion. Inputs for the combinational image fusion scheme are derived from Singular Value Decomposition (SVD) for contrast enhancement in HSV color space and from the Shades of Gray algorithm for color constancy, and are fused using the Discrete Wavelet Transform (DWT). To appraise the work, both visual and quantitative analyses are performed. The restored images demonstrate improved contrast and effective enhancement of overall image quality and visibility. The proposed algorithm performs on par with recent underwater dehazing techniques.
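The Shades of Gray step used for color constancy above has a compact closed form: the Minkowski p-norm mean of each channel is taken as proportional to the illuminant color (p = 1 reduces to Gray World, p → ∞ to White Patch). A minimal NumPy sketch; the function names and the per-channel correction are our illustration, not the paper's exact pipeline:

```python
import numpy as np

def shades_of_gray(image, p=6):
    """Shades of Gray illuminant estimate: the Minkowski p-norm mean
    of each channel is assumed proportional to the illuminant color."""
    flat = image.reshape(-1, 3).astype(np.float64)
    e = np.power(np.mean(np.power(flat, p), axis=0), 1.0 / p)
    return e / np.linalg.norm(e)              # unit-length illuminant estimate

def correct_colors(image, illuminant):
    """Von Kries style per-channel scaling that neutralizes the cast."""
    gain = illuminant.mean() / illuminant     # boost under-lit channels
    return np.clip(image * gain, 0.0, 1.0)

# Synthetic gray scene under a bluish underwater-like cast:
cast = np.array([0.6, 0.8, 1.0])
img = np.full((4, 4, 3), 0.5) * cast
est = shades_of_gray(img)
restored = correct_colors(img, est)
```

On this synthetic input the estimate recovers the cast direction exactly and the restored image is neutral gray; real underwater images violate the gray-scene assumption, which is why the paper fuses this output with an SVD-based contrast-enhanced input.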


2020 ◽  
Vol 64 (5) ◽  
pp. 50411-1-50411-8
Author(s):  
Hoda Aghaei ◽  
Brian Funt

Abstract For research in the field of illumination estimation and color constancy, there is a need for ground-truth measurement of the illumination color at many locations within multi-illuminant scenes. A practical approach to obtaining such ground-truth illumination data is presented here. The proposed method involves using a drone to carry a gray ball of known percent surface spectral reflectance throughout a scene while photographing it frequently during the flight using a calibrated camera. The captured images are then post-processed. In the post-processing step, machine vision techniques are used to detect the gray ball within each frame. The camera RGB of light reflected from the gray ball provides a measure of the illumination color at that location. In total, the dataset contains 30 scenes with 100 illumination measurements on average per scene. The dataset is available for download free of charge.
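The ground-truth extraction step described above reduces to averaging the camera RGB over the detected ball pixels and dividing out the ball's known reflectance. A minimal sketch, assuming the ball mask has already been produced by the machine-vision detector; the function name and the 50% reflectance are illustrative assumptions, not the paper's values:

```python
import numpy as np

def illumination_from_gray_ball(frame, ball_mask, reflectance=0.5):
    """Measure the illumination color at the ball's location: average
    the linear camera RGB over the detected ball pixels, then divide
    out the ball's known gray surface reflectance."""
    pixels = frame[ball_mask]                 # N x 3 linear RGB samples
    rgb = pixels.mean(axis=0) / reflectance   # undo the surface reflectance
    return rgb / rgb.sum()                    # report as rgb chromaticity

# Synthetic frame: an illuminant of relative RGB (1.0, 0.8, 0.6)
# reflecting off a 50%-gray ball occupying a small image region.
frame = np.zeros((8, 8, 3))
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True
frame[mask] = 0.5 * np.array([1.0, 0.8, 0.6])
chroma = illumination_from_gray_ball(frame, mask)
```

Repeating this per frame along the drone's flight path yields the per-location illumination measurements that make up the dataset.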


2019 ◽  
Vol 33 (2) ◽  
pp. 113-123
Author(s):  
G. I. Rozhkova ◽  
E. N. Iomdina ◽  
O. M. Selina ◽  
A. V. Belokopytov ◽  
P. P. Nikolayev

Author(s):  
Joshua Gert

This chapter presents an account of color constancy that explains a well-known division in the data from color-constancy experiments: So-called “paper matches” exhibit a much higher level of constancy than so-called “hue-saturation matches.” It argues that the visual representation of objective color is the representation of something associated with a function from viewing circumstances to color appearances. Thus, a relatively robust constancy in the representation of objective color is perfectly consistent with a relatively less robust level of constancy in color appearance. The account also endorses Hilbert’s idea that we can represent the color of the illumination on a surface as well as the color of the surface itself. Finally, the chapter addresses an objection to the hybrid view that notes our capacity to make very fine-grained distinctions between the objective colors of surfaces.


2012 ◽  
Vol 34 (5) ◽  
pp. 918-929 ◽  
Author(s):  
A. Gijsenij ◽  
T. Gevers ◽  
J. van de Weijer

2020 ◽  
Vol 10 (18) ◽  
pp. 6392
Author(s):  
Xieliu Yang ◽  
Chenyu Yin ◽  
Ziyu Zhang ◽  
Yupeng Li ◽  
Wenfeng Liang ◽  
...  

Recovering correct, or at least realistic, colors of underwater scenes is a challenging image processing problem because the imaging conditions, including the optical water type, scene location, illumination, and camera settings, are unknown. Under the assumption that the scene illumination is uniform, a chromatic adaptation-based color correction technique is proposed in this paper to remove the color cast using a single underwater image without any other information. First, the underwater RGB image is linearized so that its pixel values are proportional to the light intensities arriving at the pixels. Second, the illumination is estimated in a uniform chromatic space based on the white-patch hypothesis. Third, the chromatic adaptation transform is applied in the device-independent XYZ color space. Qualitative and quantitative evaluations both show that the proposed method outperforms the other tested methods in terms of color restoration, especially for images with severe color cast. The proposed method is simple yet effective and robust, and is helpful for recovering the in-air appearance of underwater scenes.
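The three steps above (linearize, estimate the illuminant under the white-patch hypothesis, adapt) can be sketched as follows. Note the paper performs the chromatic adaptation transform in XYZ space and estimates the illuminant in a uniform chromatic space; the diagonal per-channel scaling and the percentile-based white patch below are simplified stand-ins, and all function names are ours:

```python
import numpy as np

def srgb_to_linear(img):
    """Undo the sRGB transfer curve so pixel values are proportional
    to light intensity (the linearization step)."""
    img = np.asarray(img, np.float64)
    return np.where(img <= 0.04045, img / 12.92, ((img + 0.055) / 1.055) ** 2.4)

def white_patch_illuminant(linear_img, percentile=99):
    """White-patch hypothesis: the brightest pixels reflect the
    illuminant; a high percentile is more robust than the raw max."""
    return np.percentile(linear_img.reshape(-1, 3), percentile, axis=0)

def von_kries_correct(linear_img, illuminant):
    """Diagonal (von Kries) adaptation toward a neutral illuminant.
    The paper applies a full chromatic adaptation transform in XYZ;
    this per-channel scaling is a simplified stand-in."""
    return np.clip(linear_img / illuminant * illuminant.max(), 0.0, 1.0)

# Synthetic linear image of a uniform gray scene under a strong cast:
img = np.full((5, 5, 3), 0.9) * np.array([0.4, 0.7, 1.0])
ill = white_patch_illuminant(img)
out = von_kries_correct(img, ill)
```

The per-channel scaling preserves the brightest channel and boosts the attenuated ones, which matches the intuition for underwater casts where red is absorbed most strongly.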

