Beyond raw-RGB and sRGB: Advocating Access to a Colorimetric Image State

2019 ◽  
Vol 2019 (1) ◽  
pp. 86-90
Author(s):  
Hakki Can Karaimer ◽  
Michael S. Brown

Most modern cameras allow captured images to be saved in two color spaces: (1) raw-RGB and (2) standard RGB (sRGB). The raw-RGB image represents a scene-referred sensor image whose RGB values are specific to the color sensitivities of the sensor's color filter array. The sRGB image represents a display-referred image that has been rendered through the camera's image signal processor (ISP). The rendering process involves several camera-specific photo-finishing manipulations intended to make the sRGB image visually pleasing. For applications that want to use a camera for purposes beyond photography, both the raw-RGB and sRGB color spaces are undesirable. For example, because the raw-RGB color space is dependent on the camera's sensor, it is challenging to develop applications that work across multiple cameras. Similarly, the camera-specific photo-finishing operations used to render sRGB images also hinder applications intended to run on different cameras. Interestingly, the ISP camera pipeline includes a colorimetric conversion stage where the raw-RGB images are converted to a device-independent color space. However, this image state is not accessible. In this paper, we advocate for the ability to access the colorimetric image state and recommend that cameras output a third image format that is based on this device-independent colorimetric space. To this end, we perform experiments to demonstrate that image pixel values in a colorimetric space are more similar across different makes and models than sRGB and raw-RGB.
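The colorimetric conversion stage described above is, at its core, a linear map from white-balanced raw-RGB to a device-independent space such as CIE XYZ. A minimal sketch in NumPy, using the standard linear-sRGB-to-XYZ matrix as a stand-in for a camera-calibrated color correction matrix (a real ISP applies a per-device matrix, often interpolated between calibrated illuminants):

```python
import numpy as np

# Stand-in color correction matrix (CCM). We reuse the standard
# linear-sRGB -> XYZ (D65) matrix for illustration; a real camera ISP
# applies a matrix calibrated for its sensor and estimated illuminant.
CCM = np.array([[0.4124, 0.3576, 0.1805],
                [0.2126, 0.7152, 0.0722],
                [0.0193, 0.1192, 0.9505]])

def raw_to_xyz(raw_rgb, ccm=CCM):
    """Map white-balanced, linear raw-RGB values (..., 3) to CIE XYZ."""
    raw = np.asarray(raw_rgb, dtype=np.float64)
    return raw @ ccm.T
```

Because the conversion is a single matrix multiply, pixel values in this state depend only on the scene and the calibration, not on the photo-finishing style of a particular camera, which is what makes the colorimetric state attractive for cross-camera applications.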

2020 ◽  
Vol 2020 (28) ◽  
pp. 193-198
Author(s):  
Hoang Le ◽  
Mahmoud Afifi ◽  
Michael S. Brown

Color space conversion is the process of converting color values in an image from one color space to another. Color space conversion is challenging because different color spaces have different sized gamuts. For example, when converting an image encoded in a medium-sized color gamut (e.g., AdobeRGB or Display-P3) to a small color gamut (e.g., sRGB), color values may need to be compressed in a many-to-one manner (i.e., multiple colors in the source gamut will map to a single color in the target gamut). If we try to convert this sRGB-encoded image back to a wider gamut color encoding, it can be challenging to recover the original colors due to the color fidelity loss. We propose a method to address this problem by embedding wide-gamut metadata inside saved images captured by a camera. Our key insight is that in the camera hardware, a captured image is converted to an intermediate wide-gamut color space (i.e., ProPhoto) as part of the processing pipeline. This wide-gamut image representation is then converted to a display color space and saved in an image format such as JPEG or HEIC. Our method includes a small sub-sampling of the color values from the ProPhoto image state in the camera in the final saved JPEG/HEIC image. We demonstrate that having this additional wide-gamut metadata available during color space conversion greatly assists in constructing a color mapping function to convert between color spaces. Our experiments show that our metadata-assisted color mapping method provides a notable improvement (up to 60% in terms of ΔE) over conventional color space methods using perceptual rendering intent. In addition, we show how to extend our approach to perform spatially adaptive color space conversion over the image for additional improvements.
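The core of the metadata-assisted idea can be illustrated with a least-squares fit: given a small set of corresponding color samples (the saved display-space values and their wide-gamut counterparts stored as metadata), estimate a mapping and apply it to the whole image. This is a simplified sketch assuming a single global affine mapping; the paper's actual mapping function and its spatial extension are more sophisticated:

```python
import numpy as np

def fit_affine_mapping(src, dst):
    """Least-squares affine map (3x3 matrix plus offset) from paired samples.

    src, dst: (N, 3) arrays of corresponding colors, e.g. sampled sRGB
    values and the wide-gamut (e.g. ProPhoto) values embedded as metadata.
    """
    A = np.hstack([src, np.ones((len(src), 1))])   # (N, 4) homogeneous coords
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (4, 3) solution
    return M

def apply_mapping(M, colors):
    """Apply the fitted map to an (N, 3) array of colors."""
    A = np.hstack([colors, np.ones((len(colors), 1))])
    return A @ M
```

Only the sampled color pairs need to be stored in the file, so the metadata cost is small relative to the gamut information it lets the decoder recover.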


2014 ◽  
Vol 696 ◽  
pp. 105-109
Author(s):  
Hao Ran Zhang ◽  
Wen Ping Ren ◽  
Wen Long Yin ◽  
Shao Feng Chen

Owing to the powerful data-processing capability of FPGAs, a fast interpolation algorithm is used to convert Bayer-format data from the CMOS sensor MT9M011 into the RGB image format. In the RGB-to-YCbCr conversion stage, the color space conversion formula is combined with the characteristics of the FPGA to realize the conversion from RGB to YCbCr. Finally, correctness is verified by experimental results obtained with the SignalTap II embedded logic analyzer.
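The RGB-to-YCbCr conversion implemented on the FPGA follows a standard linear formula; a floating-point reference in Python (full-range BT.601, as used in JPEG; the FPGA's fixed-point arithmetic would approximate these coefficients):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 RGB -> YCbCr; inputs and outputs in [0, 255]."""
    r, g, b = (np.asarray(rgb, dtype=np.float64)[..., i] for i in range(3))
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)
```

In hardware, the same formula is typically realized with integer multipliers and shifts (e.g., coefficients scaled by 256), which keeps the pipeline to a few clock cycles per pixel.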


Agriculture ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 6
Author(s):  
Ewa Ropelewska

The aim of this study was to evaluate the usefulness of the texture and geometric parameters of endocarp (pit) for distinguishing different cultivars of sweet cherries using image analysis. The textures from images converted to color channels and the geometric parameters of the endocarp (pits) of sweet cherry ‘Kordia’, ‘Lapins’, and ‘Büttner’s Red’ were calculated. For the set combining the selected textures from all color channels, the accuracy reached 100% when comparing ‘Kordia’ vs. ‘Lapins’ and ‘Kordia’ vs. ‘Büttner’s Red’ for all classifiers. The pits of ‘Kordia’ and ‘Lapins’, as well as ‘Kordia’ and ‘Büttner’s Red’, were also 100% correctly discriminated by discriminative models built separately for the RGB, Lab and XYZ color spaces, the G, L and Y color channels, and models combining selected textural and geometric features. For the discrimination of the ‘Lapins’ and ‘Büttner’s Red’ pits, slightly lower accuracies were determined: up to 93% for models built based on textures selected from all color channels, 91% for the RGB color space, 92% for the Lab and XYZ color spaces, 84% for the G and L color channels, 83% for the Y channel, 94% for geometric features, and 96% for combined textural and geometric features.
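Texture parameters of the kind used in such studies are commonly derived from a gray-level co-occurrence matrix (GLCM) computed per color channel. A minimal NumPy sketch for a horizontal-neighbor GLCM and three common features (the study's exact texture set and software are not specified here, so this is illustrative only):

```python
import numpy as np

def glcm_features(channel, levels=8):
    """GLCM features for one 8-bit channel, horizontal-neighbor offset."""
    q = np.floor(channel.astype(np.float64) / 256.0 * levels).astype(int)
    q = np.clip(q, 0, levels - 1)                 # quantize to `levels` bins
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()                            # normalize to probabilities
    i, j = np.indices((levels, levels))
    return {
        "contrast": float(np.sum(glcm * (i - j) ** 2)),
        "homogeneity": float(np.sum(glcm / (1.0 + np.abs(i - j)))),
        "energy": float(np.sum(glcm ** 2)),
    }
```

Computing such features on each channel of the RGB, Lab and XYZ representations yields the per-color-space feature sets from which discriminative subsets can be selected.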


2021 ◽  
Vol 13 (5) ◽  
pp. 939
Author(s):  
Yongan Xue ◽  
Jinling Zhao ◽  
Mingmei Zhang

To accurately extract cultivated land boundaries from high-resolution remote sensing imagery, an improved watershed segmentation algorithm was proposed herein based on a combination of pre- and post-improvement procedures. Image contrast enhancement was used as the pre-improvement, while the color distance of the Commission Internationale de l´Eclairage (CIE) color spaces, Lab and Luv, was used as the regional similarity measure for region merging as the post-improvement. Furthermore, the area relative error criterion (δA), the pixel quantity error criterion (δP), and the consistency criterion (Khat) were used for evaluating the image segmentation accuracy. Region merging in the Red–Green–Blue (RGB) color space was selected as the baseline for comparing the proposed algorithm when extracting cultivated land boundaries. The validation experiments were performed using a subset of a Chinese Gaofen-2 (GF-2) remote sensing image with a coverage area of 0.12 km2. The results showed the following: (1) The contrast-enhanced image exhibited an obvious gain in terms of improving the image segmentation effect and time efficiency using the improved algorithm. The time efficiency increased by 10.31%, 60.00%, and 40.28% in the RGB, Lab, and Luv color spaces, respectively. (2) The optimal segmentation and merging scale parameters in the RGB, Lab, and Luv color spaces were C for minimum areas of 2000, 1900, and 2000, and D for color differences of 1000, 40, and 40, respectively. (3) The algorithm improved the time efficiency of cultivated land boundary extraction in the Lab and Luv color spaces by 35.16% and 29.58%, respectively, compared to the RGB color space. Compared to the RGB color space, the extraction accuracies measured by δA, δP, and Khat improved by 76.92%, 62.01%, and 16.83%, respectively, in the Lab color space, and by 55.79%, 49.67%, and 13.42% in the Luv color space.
(4) In terms of visual comparison, time efficiency, and segmentation accuracy, the comprehensive extraction effect of the proposed algorithm was obviously better than that of the RGB color-space-based algorithm. The established accuracy evaluation indicators were also proven to be consistent with the visual evaluation. (5) The proposed method showed satisfactory transferability on a wider test area with a coverage of 1 km2. In addition, the proposed method performs region merging in the CIE color space, based on image contrast enhancement, according to the simulated immersion watershed segmentation results. It is a useful extension of the watershed segmentation algorithm for extracting cultivated land boundaries, which provides a reference for enhancing the watershed algorithm.
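The regional similarity measure driving the merging step is the Euclidean color distance in a CIE space. A sketch of sRGB-to-Lab conversion and the ΔE*76 distance between two region mean colors (assuming sRGB input and the D65 white point; the paper applies this measure inside the watershed region-merging loop):

```python
import numpy as np

_M = np.array([[0.4124, 0.3576, 0.1805],     # linear-sRGB -> XYZ (D65)
               [0.2126, 0.7152, 0.0722],
               [0.0193, 0.1192, 0.9505]])
_WHITE = np.array([0.95047, 1.0, 1.08883])   # D65 reference white

def srgb_to_lab(rgb):
    """sRGB in [0, 1] -> CIE Lab (D65 white point)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = lin @ _M.T / _WHITE
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e76(mean_rgb_a, mean_rgb_b):
    """Euclidean Lab distance (ΔE*76) between two region mean colors."""
    return float(np.linalg.norm(srgb_to_lab(mean_rgb_a) - srgb_to_lab(mean_rgb_b)))
```

Two adjacent regions would be merged when this distance falls below the merging scale parameter D; in Lab and Luv the distance approximates perceptual difference far better than raw RGB Euclidean distance does.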


Author(s):  
Sumitra Kisan ◽  
Sarojananda Mishra ◽  
Ajay Chawda ◽  
Sanjay Nayak

This article describes the vital role that the fractal dimension (FD) plays in fractal geometry. FD is a measure that characterizes the complexity and irregularity of fractals, denoting the amount of space filled up. There are many procedures to evaluate the dimension of fractal surfaces, such as the box count, differential box count, and improved differential box count methods. These methods are primarily used for grayscale images. The authors' objective in this article is to estimate the fractal dimension of color images using different color models. The authors propose a novel method for the estimation in the CMY and HSV color spaces. To obtain the results, they performed test operations on a number of color images in the RGB color space. The authors present their experimental results and discuss the issues that characterize the approach. Finally, the article concludes with an analysis of the calculated FDs for images in different color spaces.
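The box-count method mentioned above estimates FD as the slope of log(box count) versus log(1/box size). A minimal sketch for a binary image (the article's CMY/HSV method would apply a differential variant per color channel, which is not reproduced here):

```python
import numpy as np

def box_count_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    """Estimate fractal dimension of a 2-D boolean array by box counting."""
    binary = np.asarray(binary, dtype=bool)
    counts = []
    for s in sizes:
        h, w = (binary.shape[0] // s) * s, (binary.shape[1] // s) * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())  # occupied s x s boxes
    # The FD estimate is the slope of log(count) vs. log(1/size).
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A filled region yields a dimension near 2 and a thin curve near 1; genuinely fractal structures fall in between.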


Author(s):  
Felicia Anisoara Damian ◽  
Simona Moldovanu ◽  
Luminita Moraru

This study aims to investigate the ability of an artificial neural network to differentiate between malignant and benign skin lesions based on two statistical texture features computed in the RGB (R red, G green, B blue) and YIQ (Y luminance, I and Q chromatic differences) color spaces. The targeted texture features are skewness (S) and kurtosis (K), which are extracted from the histograms of each color channel of the two color spaces, for the two classes of lesions: nevi and melanomas. The extracted data are used to train Feed-Forward Back-Propagation Networks (FFBPNs). The number of neurons in the hidden layer varies: 8, 16, 24, or 32. The results indicate that the skewness features computed for the red channel in the RGB color space are the best choice for reaching the goal of this study. The reported results show the advantages of monochrome channel representation for skin lesion diagnosis.
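Skewness and kurtosis of a channel's distribution are its third and fourth standardized moments. A sketch of their computation per color channel (here kurtosis is reported as excess kurtosis, i.e. with 3 subtracted; the study does not state which convention it uses):

```python
import numpy as np

def skew_kurtosis(channel):
    """Return (skewness, excess kurtosis) of one channel's pixel values."""
    x = np.asarray(channel, dtype=np.float64).ravel()
    z = (x - x.mean()) / x.std()      # standardize to zero mean, unit std
    return (z ** 3).mean(), (z ** 4).mean() - 3.0
```

Applying this to each of the R, G, B, Y, I and Q channels gives the S and K features used as FFBPN inputs.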


Author(s):  
JINXIANG MA ◽  
Xinnan Fan ◽  
Simon X. Yang ◽  
Xuewu Zhang ◽  
Xifang Zhu

To improve the contrast and restore the color of underwater images captured by camera sensors, which often suffer from insufficient detail and color cast, a fusion algorithm for image enhancement in different color spaces based on contrast limited adaptive histogram equalization (CLAHE) is proposed in this article. The original color image is first converted from the RGB color space to two different special color spaces: YIQ and HSI. The color space conversion from RGB to YIQ is a linear transformation, while the RGB-to-HSI conversion is nonlinear. The algorithm then applies CLAHE separately in the YIQ and HSI color spaces to obtain two different enhanced images: the luminance component (Y) in the YIQ color space and the intensity component (I) in the HSI color space are enhanced with the CLAHE algorithm. CLAHE has two key parameters, Block Size and Clip Limit, which mainly control the quality of the enhanced image. After that, the YIQ and HSI enhanced images are converted back to RGB. When the red, green, and blue components are not coherent in the YIQ-RGB or HSI-RGB images, the three components are harmonized with the CLAHE algorithm in RGB space. Finally, using a four-direction Sobel edge detector in the bounded general logarithm ratio operation, a self-adaptive weight selection nonlinear image enhancement is carried out to fuse the YIQ-RGB and HSI-RGB images into the final fused image. The enhancement fusion algorithm has two key factors, the average of the Sobel edge detector and the fusion coefficient, which determine the effect of the enhancement fusion. A series of evaluation metrics, such as mean, contrast, entropy, colorfulness metric (CM), mean square error (MSE), and peak signal-to-noise ratio (PSNR), are used to assess the proposed enhancement algorithm. The experimental results show that the proposed algorithm provides more detail enhancement and better colorfulness restoration than other existing image enhancement algorithms, effectively suppresses noise interference, and improves the quality of underwater images.
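The clip-limit mechanism at the heart of CLAHE can be illustrated without the tiling step: the channel's histogram is clipped at a limit, the clipped mass is redistributed uniformly, and the resulting CDF becomes the intensity mapping. A single-tile simplification follows (real CLAHE additionally divides the image into blocks of the chosen Block Size and bilinearly interpolates between per-block mappings):

```python
import numpy as np

def clip_limited_equalize(channel, clip_limit=0.01, bins=256):
    """Contrast-limited histogram equalization of one channel in [0, 255].

    Single-tile simplification of CLAHE: no blocks, no interpolation.
    clip_limit is the maximum fraction of pixels any histogram bin may hold.
    """
    channel = np.asarray(channel, dtype=np.float64)
    hist, edges = np.histogram(channel, bins=bins, range=(0.0, 256.0))
    hist = hist / channel.size                           # normalize to a PDF
    excess = np.clip(hist - clip_limit, 0.0, None).sum()
    hist = np.minimum(hist, clip_limit) + excess / bins  # redistribute excess
    cdf = np.cumsum(hist)
    cdf /= cdf[-1]
    mapped = np.interp(channel.ravel(), edges[:-1], cdf * 255.0)
    return mapped.reshape(channel.shape)
```

Clipping the histogram bounds the slope of the CDF, which is exactly what limits the contrast amplification (and hence the noise amplification) relative to plain histogram equalization.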


Author(s):  
Ewa Ropelewska ◽  
Krzysztof P. Rutkowski

Peaches belonging to different cultivars can be characterized by differences in their properties. The aim of this study was to evaluate the usefulness of individual parts of the fruit (skin, flesh, stone and seed) for cultivar discrimination of peaches based on textures determined using image analysis. Discriminant analysis was performed using the classifiers of Bayes net, logistic, SMO, multi-class classifier and random forest, based on a set of combined textures selected from all color channels R, G, B, L, a, b, X, Y, Z and on textures selected separately for the RGB, Lab and XYZ color spaces. In the case of sets of textures selected from all color channels (R, G, B, L, a, b, X, Y, Z), an accuracy of 100% was observed for flesh, stones and seeds for selected classifiers. The sets of textures selected from the RGB color space produced a correctness of 100% for the flesh and seeds of the peaches. In the case of the Lab and XYZ color spaces, slightly lower accuracies than for the RGB color space were obtained, and an accuracy of 100% was noted only for the discrimination of peach seeds. The research proved the usefulness of selected texture parameters of fruit flesh, stones and seeds for successful discrimination of peach cultivars with an accuracy of 100%. Distinguishing between cultivars may be important for breeders, consumers and the peach industry for ensuring adequate processing conditions and equipment parameters. Cultivar identification of fruit by humans may be prone to large errors, while molecular or chemical methods may require special equipment or be time-consuming. Image analysis may ensure an objective, rapid and relatively inexpensive procedure with high accuracy for peach cultivar discrimination.
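The discriminant-analysis step can be illustrated with the simplest possible classifier over texture feature vectors, a nearest-centroid rule (the study itself used Bayes net, logistic, SMO, multi-class classifier and random forest; this stand-in only shows the train/predict shape of the task, not its actual models):

```python
import numpy as np

def fit_nearest_centroid(X, y):
    """Compute one centroid per class from feature vectors X (N, d)."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict_nearest_centroid(model, X):
    """Assign each row of X to the class of its nearest centroid."""
    classes, centroids = model
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]
```

When the selected texture features separate the cultivars well, as reported above for flesh, stones and seeds, even such a simple rule achieves perfect accuracy; the stronger classifiers matter in the harder, overlapping cases.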


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
D. Granados-López ◽  
A. García-Rodríguez ◽  
S. García-Rodríguez ◽  
A. Suárez-García ◽  
M. Díez-Mediavilla ◽  
...  

Digital sky images are studied for the definition of sky conditions in accordance with the CIE Standard General Sky Guide. Likewise, adequate image-processing methods are analyzed that highlight key image information prior to the application of Artificial Neural Network (ANN) classification algorithms. Twenty-two image-processing methods are reviewed and applied to a broad and unbiased dataset of 1500 sky images recorded in Burgos, Spain, over an extensive experimental campaign. The dataset comprises one hundred images of each CIE standard sky type, previously classified from simultaneous sky scanner data. Image-processing methods based on color spaces, spectral features, and texture filters are applied. While the use of the traditional RGB color space for image processing yielded good results (ANN accuracy of 86.6%), other color spaces, such as Hue Saturation Value (HSV), may be more appropriate and increased the accuracy of the global classifications. The use of either the green or the blue monochromatic channel improved sky classification, both for the fifteen CIE standard sky types and for the simpler classification into clear, partial, and overcast conditions. The main conclusion was that specific image-processing methods can improve ANN-algorithm accuracy, depending on the image information required for the classification problem.
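The HSV representation favored above is a fixed transform of RGB; a vectorized sketch (hue in degrees, inputs assumed normalized to [0, 1]) shows the pre-processing step that would precede the ANN:

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Vectorized RGB -> HSV; inputs in [0, 1], hue returned in degrees."""
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)
    c = v - rgb.min(axis=-1)                      # chroma
    s = np.where(v > 0, c / np.where(v > 0, v, 1.0), 0.0)
    safe_c = np.where(c > 0, c, 1.0)              # avoid division by zero
    h = np.zeros_like(v)
    h = np.where(v == r, ((g - b) / safe_c) % 6.0, h)
    h = np.where(v == g, (b - r) / safe_c + 2.0, h)
    h = np.where(v == b, (r - g) / safe_c + 4.0, h)
    h = np.where(c == 0.0, 0.0, h * 60.0)         # gray pixels get hue 0
    return np.stack([h, s, v], axis=-1)
```

Separating chromatic content (H, S) from brightness (V) is what makes HSV attractive for sky classification, where cloud cover changes saturation and value more than hue.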


2012 ◽  
Vol 22 ◽  
pp. 21-26 ◽  
Author(s):  
Jonathan Cepeda-Negrete ◽  
Raul E. Sanchez-Yanez

Color constancy is an important process in a number of vision tasks. Most devices for capturing images operate on the RGB color space and, usually, the processing of the images is done in this space, although some processes have shown better performance when a perceptual color space is used instead. In this paper, experiments on the White Patch Retinex, a commonly used color constancy algorithm, are performed in two color spaces, RGB and CIELAB, for comparison purposes. Experimental results on an imagery set are analyzed using a no-reference quality metric, and the outcomes are discussed. It has been found that the White Patch Retinex algorithm performs better in RGB than in CIELAB, but when color adjustments are implemented in sequence, first in CIELAB and then in RGB, much better results are obtained.
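A minimal White Patch Retinex in RGB rescales each channel by its maximum (or a high percentile, a common robustness tweak), so that the brightest patch is assumed to be white; the paper's CIELAB variant and the sequential CIELAB-then-RGB adjustment are not reproduced in this sketch:

```python
import numpy as np

def white_patch_retinex(image, percentile=100.0):
    """White Patch Retinex: per-channel rescaling of an (H, W, 3) image.

    Assumes the brightest response in each channel corresponds to a
    white surface; `percentile` < 100 makes the estimate robust to
    isolated bright outliers.
    """
    img = np.asarray(image, dtype=np.float64)
    scale = np.percentile(img.reshape(-1, 3), percentile, axis=0)
    scale = np.where(scale > 0, scale, 1.0)       # guard all-zero channels
    return np.clip(img / scale, 0.0, 1.0)
```

Because each channel is divided by its own estimated illuminant response, a uniform color cast (e.g. a reddish illuminant) is removed in a single pass.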

