Improving Color Space Conversion for Camera-Captured Images via Wide-Gamut Metadata

2020 ◽  
Vol 2020 (28) ◽  
pp. 193-198
Author(s):  
Hoang Le ◽  
Mahmoud Afifi ◽  
Michael S. Brown

Color space conversion is the process of converting the color values in an image from one color space to another. It is challenging because different color spaces have different-sized gamuts. For example, when converting an image encoded in a medium-sized color gamut (e.g., AdobeRGB or Display-P3) to a small color gamut (e.g., sRGB), color values may need to be compressed in a many-to-one manner (i.e., multiple colors in the source gamut map to a single color in the target gamut). If we try to convert this sRGB-encoded image back to a wider-gamut color encoding, it can be challenging to recover the original colors due to this loss of color fidelity. We propose a method to address this problem by embedding wide-gamut metadata inside saved images captured by a camera. Our key insight is that in the camera hardware, a captured image is converted to an intermediate wide-gamut color space (i.e., ProPhoto) as part of the processing pipeline. This wide-gamut image representation is then converted to a display color space and saved in an image format such as JPEG or HEIC. Our method includes a small sub-sampling of the color values from the ProPhoto image state in the camera in the final saved JPEG/HEIC image. We demonstrate that having this additional wide-gamut metadata available during color space conversion greatly assists in constructing a color mapping function to convert between color spaces. Our experiments show that our metadata-assisted color mapping method provides a notable improvement (up to 60% in terms of ΔE) over conventional color space conversion methods using a perceptual rendering intent. In addition, we show how to extend our approach to perform spatially adaptive color space conversion over the image for additional improvements.
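The metadata-assisted mapping idea above can be sketched in a much-simplified form: given a handful of (narrow-gamut, wide-gamut) color pairs recovered from the saved metadata, fit a global 3×3 linear mapping by least squares. This is an illustrative assumption, not the authors' exact mapping function (which need not be linear), and the sample values below are made up.

```python
# Fit a 3x3 color mapping M from sampled color correspondences via the
# normal equations M = (A^T A)^-1 A^T B, solved with Gauss-Jordan elimination.
# A holds the narrow-gamut samples (rows), B the matching wide-gamut samples.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def solve(A, B):
    # Solve A X = B (A is 3x3, B is 3x3) by Gauss-Jordan with partial pivoting.
    n = len(A)
    M = [A[i][:] + B[i][:] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

def fit_color_map(src_samples, dst_samples):
    At = transpose(src_samples)
    return solve(matmul(At, src_samples), matmul(At, dst_samples))

def apply_map(M, rgb):
    # dst[j] = sum_k src[k] * M[k][j]
    return [sum(M[k][j] * rgb[k] for k in range(3)) for j in range(3)]
```

With enough well-spread samples, the fit recovers a linear mapping exactly; the paper's point is that even a small sub-sampling constrains the map far better than a blind inverse-gamut guess.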

2012 ◽  
Vol 430-432 ◽  
pp. 838-841
Author(s):  
Wen Ge Chen

This paper studies the color-information reproduction error of digital images across different color gamuts. Using different gamut mapping methods, experiments were carried out in the image processing software Photoshop to obtain the corresponding image effects. The results were printed on a digital press and measured with a spectrodensitometer, and the measured data were processed and analyzed in Excel. The color-information loss of digital images in the RGB and CMYK color spaces is obtained, which can provide a certain basis for controlling color loss.
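As an illustrative toy example of where such round-trip loss can arise (this is not the paper's Photoshop/press/spectrodensitometer workflow), the sketch below converts RGB to a naive CMYK, quantizes the inks to whole percentages as a stand-in for device limits, converts back, and measures the residual RGB error:

```python
# Naive RGB <-> CMYK round trip with ink quantization as a simple loss model.

def rgb_to_cmyk(r, g, b):                 # channels in [0, 1]
    k = 1 - max(r, g, b)
    if k >= 1.0:
        return 0.0, 0.0, 0.0, 1.0
    return ((1 - r - k) / (1 - k), (1 - g - k) / (1 - k),
            (1 - b - k) / (1 - k), k)

def cmyk_to_rgb(c, m, y, k):
    return ((1 - c) * (1 - k), (1 - m) * (1 - k), (1 - y) * (1 - k))

def quantize(x, steps=100):               # round to whole ink percentages
    return round(x * steps) / steps

def roundtrip_error(rgb):
    cmyk = tuple(quantize(v) for v in rgb_to_cmyk(*rgb))
    back = cmyk_to_rgb(*cmyk)
    return sum((a - b) ** 2 for a, b in zip(rgb, back)) ** 0.5
```

Real presses lose more than this (gamut clipping, dot gain), but the round-trip-and-measure structure mirrors the experiment described above.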


2019 ◽  
Vol 2019 (1) ◽  
pp. 86-90
Author(s):  
Hakki Can Karaimer ◽  
Michael S. Brown

Most modern cameras allow captured images to be saved in two color spaces: (1) raw-RGB and (2) standard RGB (sRGB). The raw-RGB image represents a scene-referred sensor image whose RGB values are specific to the color sensitivities of the sensor's color filter array. The sRGB image represents a display-referred image that has been rendered through the camera's image signal processor (ISP). The rendering process involves several camera-specific photo-finishing manipulations intended to make the sRGB image visually pleasing. For applications that want to use a camera for purposes beyond photography, both the raw-RGB and sRGB color spaces are undesirable. For example, because the raw-RGB color space is dependent on the camera's sensor, it is challenging to develop applications that work across multiple cameras. Similarly, the camera-specific photo-finishing operations used to render sRGB images also hinder applications intended to run on different cameras. Interestingly, the ISP camera pipeline includes a colorimetric conversion stage where the raw-RGB images are converted to a device-independent color space. However, this image state is not accessible. In this paper, we advocate for the ability to access the colorimetric image state and recommend that cameras output a third image format that is based on this device-independent colorimetric space. To this end, we perform experiments to demonstrate that image pixel values in a colorimetric space are more similar across different makes and models than those in the sRGB and raw-RGB spaces.
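The colorimetric stage discussed above typically amounts to white balancing followed by a 3×3 color-correction matrix (CCM) into CIE XYZ. A minimal sketch of that stage follows; the gains and CCM are made-up placeholder values, not any real camera's calibration:

```python
# White-balance a raw-RGB pixel, then map sensor space to CIE XYZ with a CCM.
# WB_GAINS and CCM are illustrative placeholders, not real calibration data.

WB_GAINS = (2.0, 1.0, 1.6)          # per-channel white-balance gains (assumed)
CCM = [[0.41, 0.36, 0.18],          # placeholder sensor -> XYZ matrix
       [0.21, 0.72, 0.07],
       [0.02, 0.12, 0.95]]

def raw_to_xyz(raw):
    # Apply gains, clip to the valid range, then the 3x3 colorimetric matrix.
    wb = [min(c * g, 1.0) for c, g in zip(raw, WB_GAINS)]
    return [sum(CCM[i][j] * wb[j] for j in range(3)) for i in range(3)]
```

Because a neutral raw pixel maps to the rows' sums after white balance, the matrix rows encode how the (hypothetical) sensor primaries combine into XYZ.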


Author(s):  
Sumitra Kisan ◽  
Sarojananda Mishra ◽  
Ajay Chawda ◽  
Sanjay Nayak

This article describes how the term fractal dimension (FD) plays a vital role in fractal geometry. It is a measure that characterizes the complexity and irregularity of fractals, denoting the amount of space filled up. There are many procedures to evaluate the dimension of fractal surfaces, such as the box count, differential box count, and improved differential box count methods. These methods are basically used for grayscale images. The authors' objective in this article is to estimate the fractal dimension of color images using different color models. The authors have proposed a novel method for the estimation in the CMY and HSV color spaces. In order to achieve the result, they performed test operations on a number of color images in the RGB color space. The authors have presented their experimental results and discussed the issues that characterize the approach. At the end, the authors conclude the article with an analysis of the calculated FDs for images in different color spaces.
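The plain box-counting estimator named above can be sketched compactly for a binary image: count occupied boxes at several scales and fit the slope of log(count) against log(1/size) by least squares (the differential variants extend this to gray levels):

```python
# Box-counting fractal dimension estimate for a square binary image.

import math

def box_count(image, size):
    n = len(image)                     # assumes a square n x n image
    boxes = set()
    for y in range(n):
        for x in range(n):
            if image[y][x]:
                boxes.add((y // size, x // size))
    return len(boxes)

def fractal_dimension(image, sizes=(1, 2, 4, 8)):
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(image, s)) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Least-squares slope of ys over xs.
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)
```

A fully filled image yields a dimension of 2 and a single point yields 0, which are useful sanity checks before moving to per-channel color estimates.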


2013 ◽  
Vol 333-335 ◽  
pp. 992-997
Author(s):  
Yun Lu Ge ◽  
Hui Han ◽  
Xiao Dong Sun ◽  
Sheng Pin Wang ◽  
Sheng Yun Ji

Most watermarking algorithms are designed for digital grayscale images, are not robust against the attacks of the print-scan process, and offer only a small embedded information capacity. To solve these problems, a new method based on the DWT and the Walsh orthogonal transform was proposed for the print-scan process of digital color images. The method converts the digital color image from the RGB to the CIE L*a*b* color space, and the watermark is embedded in the low-frequency components of the DWT of the image. The results show that the correlation of the watermark is improved by the Walsh orthogonal transform, the watermark extraction rate is high, and the extracted watermark remains distinct and readable after the print-scan process. The method is robust against various attacks of the print-scan process, such as color space conversion, image halftoning, D/A and A/D conversion, scaling, rotation, cropping, skew, and random noise.
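The core embed-in-the-low-band idea can be illustrated in one dimension (this is a deliberately reduced sketch: one level of a Haar DWT with additive embedding, omitting the paper's 2-D transform, Walsh coding, and L*a*b* conversion):

```python
# One-level Haar DWT, additive watermark embedding in the low band, and
# non-blind extraction against the original signal.

def haar_dwt(signal):
    low = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    high = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return low, high

def haar_idwt(low, high):
    out = []
    for l, h in zip(low, high):
        out += [l + h, l - h]          # exact inverse of the analysis step
    return out

def embed(signal, bits, strength=0.1):
    low, high = haar_dwt(signal)
    low = [l + strength * (1 if b else -1) for l, b in zip(low, bits)]
    return haar_idwt(low, high)

def extract(marked, original):
    low_m, _ = haar_dwt(marked)
    low_o, _ = haar_dwt(original)
    return [1 if m > o else 0 for m, o in zip(low_m, low_o)]
```

Embedding in the low band is what survives print-scan degradation best, since halftoning and rescanning mostly disturb high frequencies.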


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Dina Khattab ◽  
Hala Mousher Ebied ◽  
Ashraf Saad Hussein ◽  
Mohamed Fahmy Tolba

This paper presents a comparative study using different color spaces to evaluate the performance of color image segmentation using the automatic GrabCut technique. GrabCut is considered one of the semiautomatic image segmentation techniques, since it requires user interaction to initialize the segmentation process. The automation of the GrabCut technique is proposed as a modification of the original semiautomatic one in order to eliminate the user interaction. The automatic GrabCut utilizes the unsupervised Orchard and Bouman clustering technique for the initialization phase. Comparisons with the original GrabCut show the efficiency of the proposed automatic technique in terms of segmentation quality and accuracy. As no single color space is recommended for every segmentation problem, automatic GrabCut is applied with the RGB, HSV, CMY, XYZ, and YUV color spaces. The comparative study and experimental results using different color images show that RGB is the best color space representation for the set of images used.
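The study above feeds the same pixels to the segmenter in several color spaces; that preprocessing step can be sketched with Python's standard colorsys module (HSV and YIQ stand in here for the paper's space list, since colorsys does not cover CMY, XYZ, or YUV):

```python
# Convert a flat list of RGB pixels into a chosen color space before
# handing them to a segmentation algorithm.

import colorsys

def convert_image(pixels, space):
    # pixels: list of (r, g, b) tuples with channels in [0, 1]
    converters = {
        "RGB": lambda r, g, b: (r, g, b),
        "HSV": colorsys.rgb_to_hsv,
        "YIQ": colorsys.rgb_to_yiq,
    }
    conv = converters[space]
    return [conv(*p) for p in pixels]
```

Running the identical segmenter over each converted copy, as the paper does, isolates the color space itself as the experimental variable.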


2016 ◽  
Author(s):  
Timothée-Florian Bronner ◽  
Ronan Boitard ◽  
Mahsa T. Pourazad ◽  
Panos Nasiopoulos ◽  
Touradj Ebrahimi

2020 ◽  
Author(s):  
Dalí Dos Santos ◽  
Adriano Silva ◽  
Paulo De Faria ◽  
Bruno Travençolo ◽  
Marcelo Do Nascimento

Oral epithelial dysplasia is a common precancerous lesion type that can be graded as mild, moderate, or severe. Although not all oral epithelial dysplasias become cancerous over time, this premalignant condition has a significant rate of progression to cancer, and early treatment has been shown to be considerably more successful. The diagnosis and the distinctions between mild, moderate, and severe grades are made by pathologists through a complex and time-consuming process in which cytological features, including nuclear shape, are analysed. Computer-aided diagnosis can be applied as a tool to aid and enhance pathologists' decisions. Recently, deep-learning-based methods have been gaining attention and have been successfully applied to nuclei segmentation problems in several scenarios. In this paper, we evaluated the impact of different color space transformations for automated nuclei segmentation on histological images of oral dysplastic tissues using fully convolutional neural networks (CNNs). The CNNs were trained using different color spaces from a dataset of tongue images from mice diagnosed with oral epithelial dysplasia. The CIE L*a*b* color space transformation achieved the best averaged accuracy over all analyzed color space configurations (88.2%). The results show that the chrominance information, i.e., the color values, does not play the most significant role for nuclei segmentation on this mouse tongue histopathological image dataset.
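The CIE L*a*b* transformation used as the winning preprocessing step above follows standard colorimetry: linearize sRGB, map to CIE XYZ, then apply the Lab nonlinearity against a D65 white point. A self-contained per-pixel sketch (standard published formulas, not the paper's code):

```python
# Convert one sRGB pixel (channels in [0, 1]) to CIE L*a*b* (D65 white).

def srgb_to_lab(r, g, b):
    def lin(c):  # sRGB inverse gamma
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = lin(r), lin(g), lin(b)
    # sRGB (D65) to CIE XYZ.
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    xn, yn, zn = 0.95047, 1.0, 1.08883   # D65 reference white

    def f(t):  # Lab companding with the linear toe near zero
        d = 6 / 29
        return t ** (1 / 3) if t > d ** 3 else t / (3 * d * d) + 4 / 29

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

Separating luminance (L*) from chrominance (a*, b*) is precisely what lets the paper test whether color, beyond intensity, matters for nuclei segmentation.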


2010 ◽  
Author(s):  
Robert Tamburo

This paper describes a set of pixel accessors that transform RGB pixel values to a different color space. Accessors for the HSI, XYZ, Yuv, YUV, HSV, Lab, Luv, HSL, CMY, and CMYK color spaces are provided here. This paper is accompanied by source code for the pixel accessors and tests, test images and parameters, and expected output images. Note: the Set() methods are incorrect; a revision will be provided by 12.17.2010.
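The note above flags the Set() methods as incorrect; the fix is simply that Set must invert Get so the pair round-trips. A minimal Python stand-in for one such accessor (the originals are C++/ITK; this sketch uses the standard colorsys module) could look like:

```python
# A Get/Set accessor pair exposing an RGB pixel as HSV. The invariant is
# that set(get(p)) returns p (up to floating-point error).

import colorsys

class HSVPixelAccessor:
    """Expose an RGB pixel (channels in [0, 1]) as HSV."""

    def get(self, rgb):
        return colorsys.rgb_to_hsv(*rgb)

    def set(self, hsv):
        # Invert the forward transform so storage stays in RGB.
        return colorsys.hsv_to_rgb(*hsv)
```

The same pattern (forward transform in Get, exact inverse in Set) applies to each of the ten color spaces listed.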


2016 ◽  
Vol 2016 ◽  
pp. 1-12 ◽  
Author(s):  
Xin Jin ◽  
Rencan Nie ◽  
Dongming Zhou ◽  
Quan Wang ◽  
Kangjian He

This paper proposes an effective multifocus color image fusion algorithm based on the nonsubsampled shearlet transform (NSST) and pulse-coupled neural networks (PCNN); the algorithm can be used in different color spaces. In this paper, we take the HSV color space as an example: the H component is clustered by an adaptive simplified PCNN (S-PCNN) and then fused according to the oscillation frequency graph (OFG) of the S-PCNN; at the same time, the S and V components are decomposed by NSST, and different fusion rules are utilized to fuse the obtained results. Finally, an inverse HSV transform is performed to get the RGB color image. The experimental results indicate that the proposed color image fusion algorithm is more efficient than other common color image fusion algorithms.
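The underlying multifocus principle, that is, keep each pixel from whichever input is locally sharper, can be shown in a much simpler form than the NSST/S-PCNN pipeline above (this gradient-based selection is a baseline sketch, not the paper's method):

```python
# Fuse two registered grayscale images by picking, per pixel, the value
# from the image with the larger local Laplacian magnitude (sharpness).

def laplacian(img, y, x):
    h, w = len(img), len(img[0])
    c = img[y][x]
    acc = 0.0
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny = min(max(y + dy, 0), h - 1)   # clamp at the borders
        nx = min(max(x + dx, 0), w - 1)
        acc += img[ny][nx] - c
    return abs(acc)

def fuse(a, b):
    return [[a[y][x] if laplacian(a, y, x) >= laplacian(b, y, x) else b[y][x]
             for x in range(len(a[0]))] for y in range(len(a))]
```

Transform-domain methods like the paper's apply the same "keep the more active coefficient" idea band by band, which avoids the blocking artifacts this pixel-wise rule can produce.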


2020 ◽  
Vol 9 (2) ◽  
pp. 1011-1018

In this paper we present an empirical examination of deep convolutional neural network (DCNN) performance in different color spaces for the classical problem of image recognition/classification. Most such deep learning architectures are applied to RGB image data sets, so our objective is to study DCNN performance in other color spaces. We describe the design of our experiment and present results on whether deep learning networks for the image recognition task are invariant to color spaces. In this study, we analyzed the performance of three popular DCNNs (VGGNet, ResNet, GoogLeNet) by providing input images in five different color spaces (RGB, normalized RGB, YCbCr, HSV, CIE-Lab) and compared performance in terms of test accuracy, test loss, and validation loss. All these combinations of networks and color spaces were investigated on two datasets: CIFAR-10 and LINNAEUS 5. Our experimental results show that CNNs are not invariant to color spaces, as different color spaces yield different performance results for the image classification task.
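Of the five input encodings tested above, normalized RGB is the simplest: each pixel is divided by its channel sum, yielding chromaticity coordinates that discard overall intensity. A minimal per-pixel sketch of that preprocessing:

```python
# Normalized RGB (chromaticity) preprocessing for a single pixel.

def normalized_rgb(r, g, b, eps=1e-12):
    s = r + g + b
    if s < eps:
        return 0.0, 0.0, 0.0      # convention chosen here for black pixels
    return r / s, g / s, b / s
```

Because the three outputs always sum to 1 (for non-black pixels), this encoding carries strictly less information than RGB, which is one reason the choice of input space can change classification results.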

