RGB Image Color Space Transformations

2010 ◽  
Author(s):  
Robert Tamburo

This paper describes a set of pixel accessors that transform RGB pixel values to a different color space. Accessors are provided for the HSI, XYZ, Yuv, YUV, HSV, Lab, Luv, HSL, CMY, and CMYK color spaces. The paper is accompanied by source code for the pixel accessors and tests, test images and parameters, and expected output images. Note: the Set() methods are incorrect; a revision will be provided by 12.17.2010.
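One of the simpler transforms such accessors implement is RGB → CMYK. The sketch below illustrates the standard formula only; it is not the paper's accessor API, and component ranges in [0, 1] are an assumption:

```python
def rgb_to_cmyk(r, g, b):
    """Convert RGB components in [0, 1] to CMYK components in [0, 1]."""
    k = 1.0 - max(r, g, b)          # black is the complement of the brightest channel
    if k == 1.0:                    # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

print(rgb_to_cmyk(1.0, 0.0, 0.0))  # pure red -> (0.0, 1.0, 1.0, 0.0)
```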

Author(s):  
Sumitra Kisan ◽  
Sarojananda Mishra ◽  
Ajay Chawda ◽  
Sanjay Nayak

This article describes the vital role the fractal dimension (FD) plays in fractal geometry. It is a measure that characterizes the complexity and irregularity of fractals, denoting the amount of space they fill. There are many procedures for evaluating the dimension of fractal surfaces, such as the box count, differential box count, and improved differential box count methods. These methods are primarily used on greyscale images. The authors' objective in this article is to estimate the fractal dimension of color images using different color models. They propose a novel method for this estimation in the CMY and HSV color spaces. To obtain results, they performed test operations on a number of color images in the RGB color space. The authors present their experimental results and discuss the issues that characterize the approach. They conclude the article with an analysis of the FDs calculated for images in different color spaces.
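The box count method the improved variants build on can be sketched in a few lines. This is a minimal pure-Python illustration on a binary image, not the authors' implementation:

```python
import math

def box_count_dimension(img, sizes):
    """Estimate fractal dimension of a square binary image (list of lists
    of 0/1) by box counting: the slope of log N(s) versus log(1/s)."""
    n = len(img)
    xs, ys = [], []
    for s in sizes:
        count = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                # a box is counted if it covers at least one set pixel
                if any(img[i + di][j + dj]
                       for di in range(s) for dj in range(s)):
                    count += 1
        xs.append(math.log(1.0 / s))
        ys.append(math.log(count))
    # least-squares slope of ys against xs
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A completely filled 64x64 square is a plane region, so its dimension is 2.
square = [[1] * 64 for _ in range(64)]
print(box_count_dimension(square, [1, 2, 4, 8]))  # -> 2.0 (up to float error)
```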


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Dina Khattab ◽  
Hala Mousher Ebied ◽  
Ashraf Saad Hussein ◽  
Mohamed Fahmy Tolba

This paper presents a comparative study using different color spaces to evaluate the performance of color image segmentation using the automatic GrabCut technique. GrabCut is considered one of the semiautomatic image segmentation techniques, since it requires user interaction to initialize the segmentation process. The automation of the GrabCut technique is proposed as a modification of the original semiautomatic one in order to eliminate the user interaction. The automatic GrabCut utilizes the unsupervised Orchard and Bouman clustering technique for the initialization phase. Comparisons with the original GrabCut show the efficiency of the proposed automatic technique in terms of segmentation quality and accuracy. As no explicit color space is recommended for every segmentation problem, automatic GrabCut is applied with the RGB, HSV, CMY, XYZ, and YUV color spaces. The comparative study and experimental results using different color images show that RGB is the best color space representation for the set of images used.
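The initialization idea, clustering pixel colors with no user interaction, can be illustrated with a simplified axis-aligned stand-in for the Orchard-Bouman split (the real method splits along the covariance eigenvector with the largest eigenvalue, not along a single channel):

```python
def max_variance_split(pixels):
    """Split RGB pixels into two clusters along the channel with the
    largest variance, cut at that channel's mean -- a simplified,
    axis-aligned stand-in for the Orchard-Bouman eigenvector split."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    variances = [sum((p[c] - means[c]) ** 2 for p in pixels) / n
                 for c in range(3)]
    axis = variances.index(max(variances))      # most spread-out channel
    low = [p for p in pixels if p[axis] <= means[axis]]
    high = [p for p in pixels if p[axis] > means[axis]]
    return low, high

# Two well-separated color populations end up in different clusters.
dark = [(10, 12, 8), (12, 9, 11), (8, 10, 10)]
bright = [(240, 238, 241), (238, 242, 239), (241, 240, 238)]
low, high = max_variance_split(dark + bright)
```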


2020 ◽  
Vol 2020 (28) ◽  
pp. 193-198
Author(s):  
Hoang Le ◽  
Mahmoud Afifi ◽  
Michael S. Brown

Color space conversion is the process of converting color values in an image from one color space to another. Color space conversion is challenging because different color spaces have different sized gamuts. For example, when converting an image encoded in a medium-sized color gamut (e.g., AdobeRGB or Display-P3) to a small color gamut (e.g., sRGB), color values may need to be compressed in a many-to-one manner (i.e., multiple colors in the source gamut will map to a single color in the target gamut). If we try to convert this sRGB-encoded image back to a wider-gamut color encoding, it can be challenging to recover the original colors due to the color fidelity loss. We propose a method to address this problem by embedding wide-gamut metadata inside saved images captured by a camera. Our key insight is that in the camera hardware, a captured image is converted to an intermediate wide-gamut color space (i.e., ProPhoto) as part of the processing pipeline. This wide-gamut image representation is then converted to a display color space and saved in an image format such as JPEG or HEIC. Our method proposes to include a small sub-sampling of the color values from the ProPhoto image state in the camera in the final saved JPEG/HEIC image. We demonstrate that having this additional wide-gamut metadata available during color space conversion greatly assists in constructing a color mapping function to convert between color spaces. Our experiments show our metadata-assisted color mapping method provides a notable improvement (up to 60% in terms of ΔE) over conventional color space methods using perceptual rendering intent. In addition, we show how to extend our approach to perform adaptive color space conversion applied spatially over the image for additional improvements.
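A toy analog of the metadata idea is sketched below. The paper constructs a proper color mapping function from the stored samples; here a nearest-neighbor lookup stands in for that mapping, and all pixel values are invented for illustration:

```python
def embed_samples(prophoto_pixels, srgb_pixels, stride):
    """Keep every `stride`-th (sRGB, ProPhoto) pair as metadata --
    a toy analog of the paper's sub-sampled wide-gamut color values."""
    return [(srgb_pixels[i], prophoto_pixels[i])
            for i in range(0, len(srgb_pixels), stride)]

def recover(srgb_pixel, metadata):
    """Map an sRGB value back toward wide gamut using the nearest stored
    sample (the paper instead fits a smooth mapping function)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(metadata, key=lambda pair: dist2(pair[0], srgb_pixel))
    return nearest[1]

# Invented sample values: wide-gamut colors and their gamut-compressed sRGB.
srgb = [(0.10, 0.20, 0.30), (0.90, 0.10, 0.10), (0.20, 0.80, 0.20)]
prophoto = [(0.12, 0.22, 0.33), (1.10, 0.05, 0.08), (0.15, 0.90, 0.18)]
metadata = embed_samples(prophoto, srgb, 1)
print(recover((0.90, 0.10, 0.10), metadata))  # -> (1.1, 0.05, 0.08)
```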


2020 ◽  
Author(s):  
Dalí Dos Santos ◽  
Adriano Silva ◽  
Paulo De Faria ◽  
Bruno Travençolo ◽  
Marcelo Do Nascimento

Oral epithelial dysplasia is a common type of precancerous lesion that can be graded as mild, moderate, or severe. Although not all oral epithelial dysplasias become cancerous over time, this premalignant condition progresses to cancer at a significant rate, and early treatment has been shown to be considerably more successful. The diagnosis and the distinction between mild, moderate, and severe grades are made by pathologists through a complex and time-consuming process in which cytological features, including nuclear shape, are analysed. Computer-aided diagnosis can be applied as a tool to aid and enhance pathologists' decisions. Recently, deep-learning-based methods have been gaining more and more attention and have been successfully applied to nuclei segmentation problems in several scenarios. In this paper, we evaluated the impact of different color space transformations on automated nuclei segmentation in histological images of oral dysplastic tissues using fully convolutional neural networks (CNNs). The CNNs were trained on different color spaces using a dataset of tongue images from mice diagnosed with oral epithelial dysplasia. The CIE L*a*b* color space transformation achieved the best average accuracy over all analyzed color space configurations (88.2%). The results show that the chrominance information, i.e., the color values, does not play the most significant role for nuclei segmentation on this mouse tongue histopathological image dataset.
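The CIE L*a*b* preprocessing such experiments rely on follows the standard sRGB → XYZ → L*a*b* pipeline. A minimal sketch, assuming sRGB input in [0, 1] and a D65 white point (a generic illustration, not the authors' pipeline):

```python
def srgb_to_lab(r, g, b):
    """Convert sRGB components in [0, 1] to CIE L*a*b* (D65 white)."""
    def linearize(u):
        # invert the sRGB transfer function
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    r, g, b = linearize(r), linearize(g), linearize(b)
    # linear sRGB -> XYZ (D65)
    x = 0.4124564 * r + 0.3575761 * g + 0.1804375 * b
    y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
    z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * b
    # normalize by the D65 reference white
    x, y, z = x / 0.95047, y / 1.0, z / 1.08883
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x), f(y), f(z)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

print(srgb_to_lab(1.0, 1.0, 1.0))  # white -> roughly (100, 0, 0)
```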


2016 ◽  
Vol 2016 ◽  
pp. 1-12 ◽  
Author(s):  
Xin Jin ◽  
Rencan Nie ◽  
Dongming Zhou ◽  
Quan Wang ◽  
Kangjian He

This paper proposes an effective multifocus color image fusion algorithm based on the nonsubsampled shearlet transform (NSST) and pulse coupled neural networks (PCNN); the algorithm can be used in different color spaces. In this paper, we take the HSV color space as an example: the H component is clustered by an adaptive simplified PCNN (S-PCNN) and then fused according to the oscillation frequency graph (OFG) of the S-PCNN; at the same time, the S and V components are decomposed by NSST, and different fusion rules are utilized to fuse the obtained results. Finally, an inverse HSV transform is performed to obtain the RGB color image. The experimental results indicate that the proposed color image fusion algorithm is more efficient than other common color image fusion algorithms.
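The per-pixel "keep the more active source" selection at the heart of multifocus fusion rules can be illustrated with a crude focus measure (a stand-in for the NSST/PCNN activity of the paper; single-channel, interior pixels only):

```python
def local_activity(img, i, j):
    """Absolute deviation from the 3x3 neighbourhood mean -- a crude
    focus measure: sharp detail deviates, defocused areas are flat."""
    neigh = [img[i + di][j + dj] for di in (-1, 0, 1) for dj in (-1, 0, 1)]
    return abs(img[i][j] - sum(neigh) / 9.0)

def fuse(img_a, img_b):
    """Per-pixel selection: keep the source with the higher activity
    (borders are copied from img_a to keep the sketch short)."""
    h, w = len(img_a), len(img_a[0])
    out = [row[:] for row in img_a]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if local_activity(img_b, i, j) > local_activity(img_a, i, j):
                out[i][j] = img_b[i][j]
    return out

# A detailed (checkerboard) source wins over a flat (defocused) one.
checker = [[255 if (i + j) % 2 else 0 for j in range(6)] for i in range(6)]
flat = [[128] * 6 for _ in range(6)]
fused = fuse(flat, checker)
```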


2020 ◽  
Vol 9 (2) ◽  
pp. 1011-1018

In this paper we present an empirical examination of deep convolutional neural network (DCNN) performance in different color spaces for the classical problem of image recognition/classification. Most such deep learning architectures are applied to RGB color space image datasets, so our objective is to study DCNN performance in other color spaces. We describe the design of our experiment and present results on whether deep learning networks for the image recognition task are invariant to color spaces or not. In this study, we have analyzed the performance of three popular DCNNs (VGGNet, ResNet, GoogleNet) by providing input images in five different color spaces (RGB, normalized RGB, YCbCr, HSV, CIE-Lab) and compared performance in terms of test accuracy, test loss, and validation loss. All these combinations of networks and color spaces are investigated on two datasets: CIFAR-10 and LINNAEUS 5. Our experimental results show that CNNs are variant to color spaces, as different color spaces yield different performance results on the image classification task.
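Evaluating a network in another color space only requires converting the input images before training. For instance, the full-range BT.601 RGB → YCbCr conversion for 8-bit components (a generic formula, not tied to the authors' code):

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr for 8-bit components (as in JFIF)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

# Mid-gray carries no chroma, so Cb and Cr sit at the 128 midpoint.
print(rgb_to_ycbcr(128, 128, 128))  # -> approximately (128.0, 128.0, 128.0)
```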


2019 ◽  
Author(s):  
Wilson Castro ◽  
Jimy Oblitas ◽  
Miguel De-la-Torre ◽  
Carlos Cotrina ◽  
Karen Bazán ◽  
...  

The classification of fresh fruits according to their ripeness is typically a subjective and tedious task; consequently, there is growing interest in the use of non-contact techniques such as those based on computer vision and machine learning. In this paper, we propose the use of non-intrusive techniques for the classification of Cape gooseberry fruits. The proposal is based on the use of machine learning techniques combined with different color spaces. Given the success of techniques such as artificial neural networks, support vector machines, decision trees, and K-nearest neighbors in addressing classification problems, we decided to use these approaches in this research work. A sample of 926 Cape gooseberry fruits was obtained, and the fruits were classified manually according to their level of ripeness into seven different classes. Images of each fruit were acquired in the RGB format through a system developed for this purpose. These images were preprocessed, filtered, and segmented until the fruits were identified. For each piece of fruit, the median color parameter values in the RGB space were obtained, and these results were subsequently transformed into the HSV and L*a*b* color spaces. The values of each piece of fruit in the three color spaces and their corresponding degrees of ripeness were arranged for use in the creation, testing, and comparison of the developed classification models. The classification of Cape gooseberry fruits by ripeness level was found to be sensitive to both the color space used and the classification technique: the models based on decision trees are the most accurate, and the models based on the L*a*b* color space obtain the best mean accuracy. However, the model that best classifies the Cape gooseberry fruits by ripeness level results from the combination of the SVM technique and the RGB color space.
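One of the compared techniques, K-nearest neighbors over median color features, can be sketched as follows. The training samples and labels below are invented toy data, not the paper's dataset:

```python
def knn_classify(sample, train, k=3):
    """Classify a color-feature vector by majority vote among the k
    nearest training samples (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda t: dist2(t[0], sample))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Toy training set: median (R, G, B) per fruit with an invented ripeness class.
train = [((60, 140, 40), "unripe"), ((70, 150, 50), "unripe"),
         ((200, 160, 30), "ripe"), ((210, 150, 40), "ripe")]
print(knn_classify((205, 155, 35), train))  # -> ripe
```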


Sensors ◽  
2019 ◽  
Vol 19 (9) ◽  
pp. 2130 ◽  
Author(s):  
Elie Zemmour ◽  
Polina Kurtser ◽  
Yael Edan

This paper presents an automatic parameter tuning procedure specially developed for a dynamic adaptive thresholding algorithm for fruit detection. One of the major algorithm strengths is its high detection performance using a small set of training images. The algorithm enables robust detection in highly variable lighting conditions. The image is dynamically split into variably sized regions, where each region has approximately homogeneous lighting conditions. Nine thresholds were selected to accommodate three different illumination levels for three different dimensions in four color spaces: RGB, HSI, LAB, and NDI. Each color space uses a different method to represent a pixel in an image: RGB (Red, Green, Blue), HSI (Hue, Saturation, Intensity), LAB (Lightness, Green to Red and Blue to Yellow) and NDI (Normalized Difference Index, which represents the normalized difference between the RGB color dimensions). The thresholds were selected by quantifying the required relation between the true positive rate and false positive rate. A tuning process was developed to determine the best-fit values of the algorithm parameters to enable easy adaptation to different kinds of fruits (shapes, colors) and environments (illumination conditions). Extensive analyses were conducted on three different databases acquired in natural growing conditions: red apples (nine images with 113 apples), green grape clusters (129 images with 1078 grape clusters), and yellow peppers (30 images with 73 peppers). These databases are provided as part of this paper for future developments. The algorithm was evaluated using cross-validation with 70% of images for training and 30% for testing. The algorithm successfully detected apples and peppers in variable lighting conditions, resulting in F-scores of 93.17% and 99.31%, respectively. Results show the importance of the tuning process for the generalization of the algorithm to different kinds of fruits and environments. In addition, this research revealed the importance of evaluating different color spaces, since for each kind of fruit a different color space might be superior to the others. The LAB color space is most robust to noise. The algorithm is robust to changes in the threshold learned by the training process and to noise effects in images.
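The NDI channel can be computed per pixel. The normalization used below, (R − G)/(R + G), is the common definition and is assumed here; check the paper for its precise variant:

```python
def ndi(r, g):
    """Normalized Difference Index between the red and green channels,
    commonly (R - G) / (R + G); assumed formula, ranges over [-1, 1]."""
    if r + g == 0:
        return 0.0   # guard against division by zero on black pixels
    return (r - g) / (r + g)

print(ndi(200, 50))  # reddish pixel  -> 0.6
print(ndi(50, 200))  # greenish pixel -> -0.6
```

The index is useful for fruit-versus-foliage separation because it depends on the red/green balance rather than on absolute brightness, which varies with illumination.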


Author(s):  
PEICHUNG SHIH ◽  
CHENGJUN LIU

Content-based face image retrieval is concerned with computer retrieval of face images (of a given subject) based on the geometric or statistical features automatically derived from these images. It is well known that color spaces provide powerful information for image indexing and retrieval by means of color invariants, color histograms, color texture, etc. This paper comparatively assesses the performance of content-based face image retrieval in different color spaces using a standard algorithm, Principal Component Analysis (PCA), which has become a popular algorithm in the face recognition community. In particular, we comparatively assess 12 color spaces (RGB, HSV, YUV, YCbCr, XYZ, YIQ, L*a*b*, U*V*W*, L*u*v*, I1I2I3, HSI, and rgb) by evaluating seven color configurations for every single color space. A color configuration is defined by an individual color component image or a combination of them. Taking the RGB color space as an example, possible color configurations are R, G, B, RG, RB, GB, and RGB. Experimental results using 600 FERET color images corresponding to 200 subjects and 456 FRGC (Face Recognition Grand Challenge) color images of 152 subjects show that some color configurations, such as YV in the YUV color space and YI in the YIQ color space, help improve face retrieval performance.
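A color configuration is simply a concatenation of the selected component images into one feature vector before PCA. A minimal sketch, with channels stored as flat lists (the representation is an assumption for illustration):

```python
def color_configuration(channels, config):
    """Build a feature vector by concatenating the requested component
    images, e.g. config "YV" over a YUV image; `channels` maps a
    component name to its flattened pixel list."""
    return [v for name in config for v in channels[name]]

# Toy 2x2 "image": each channel flattened to 4 values.
yuv = {"Y": [10, 20, 30, 40], "U": [1, 2, 3, 4], "V": [5, 6, 7, 8]}
print(color_configuration(yuv, "YV"))  # -> [10, 20, 30, 40, 5, 6, 7, 8]
```

PCA is then applied to these vectors, so a two-component configuration such as YV doubles the input dimensionality relative to Y alone.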


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Hayrettin Toylan ◽  
Hilmi Kuscu

This study focused on a multicolor space approach, which provides a better specification of the color and size of an apple in an image. In the study, a real-time machine vision system classifying apples into four categories with respect to color and size was designed. In the analysis, different color spaces were used. As a result, a 97% identification success rate for the red fields of the apple was obtained based on the values of the parameter a of the CIE L*a*b* color space. Similarly, a 94% identification success rate for the yellow fields was obtained based on the values of the parameter Y of the CIE XYZ color space. With the designed system, three kinds of apples (Golden, Starking, and Jonagold) were investigated by classifying them into four groups with respect to two parameters, color and size. Finally, a 99% success rate was achieved in the analyses conducted on 595 apples.

