Color Spaces
Recently Published Documents

TOTAL DOCUMENTS: 545 (five years: 137)
H-INDEX: 29 (five years: 4)

Author(s):  
Harsha B. K.

Abstract: Digital color images can be represented in a variety of color spaces. Red-Green-Blue (RGB) is the most commonly used and can be transformed into the Luminance, Blue-difference, Red-difference (YCbCr) space. Features defined on these color pixels provide strong evidence about whether a pixel belongs to human skin or not. This paper proposes a novel color-based feature extraction method that makes use of red, green, blue, luminance, hue, and saturation information. The proposed method is applied to an image database containing people of various ages, races, and genders. The extracted features are used to segment human skin with the Support Vector Machine (SVM) algorithm, and the promising performance result of 89.86% accuracy is compared to the most commonly used methods in the literature. Keywords: skin segmentation, SVM, feature extraction, digital images
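As a rough illustration of the pipeline this abstract describes, the sketch below builds per-pixel RGB/luminance/hue/saturation features and trains an SVM skin classifier. The file paths, kernel choice, and toy data layout are assumptions for illustration, not the authors' code.

```python
# Sketch: per-pixel color features (R, G, B, luminance, hue, saturation)
# fed to an SVM, assuming OpenCV and scikit-learn; paths are placeholders.
import cv2
import numpy as np
from sklearn.svm import SVC

def pixel_features(image_bgr):
    """Stack R, G, B, Y (luminance), H, S values into one feature row per pixel."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    b, g, r = cv2.split(image_bgr)
    feats = np.stack([r, g, b, ycrcb[..., 0], hsv[..., 0], hsv[..., 1]], axis=-1)
    return feats.reshape(-1, 6).astype(np.float32) / 255.0

# Hypothetical training pair: an image and a same-sized binary skin mask.
image = cv2.imread("train_image.png")
skin = cv2.imread("skin_mask.png", 0).ravel() > 0

clf = SVC(kernel="rbf")  # kernel choice is an assumption
clf.fit(pixel_features(image), skin)

# Segment a new image: one skin/non-skin prediction per pixel.
test = cv2.imread("test_image.png")
segmentation = clf.predict(pixel_features(test)).reshape(test.shape[:2])
```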


2021 ◽  
Vol 12 (3) ◽  
pp. 587-593
Author(s):  
Kalthom Ibrahim ◽  
Mohammed Abdallah Almaleeh ◽  
Moaawia Mohamed Ahmed ◽  
Dalia Mahmoud Adam

This paper presents a simple approach that automatically detects Neisseria bacteria cells in cerebrospinal fluid smear images. The proposed methodology consists of acquiring cerebrospinal fluid smear images and transforming them from red, green, blue (RGB) into other color spaces. This step is followed by sub-imaging and segmenting the images to extract image features, then validating and classifying the bacteria images based on the extracted features using neural networks. The proposed neural-network diagnosis of Neisseria bacteria achieved high-precision performance in some of the suggested groups.
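A minimal sketch of the color-space transformation and segmentation steps, assuming OpenCV, an HSV transform, and Otsu thresholding; the paper's actual transform, segmentation rule, and neural-network classifier are not specified here.

```python
# Sketch: transform the smear image to HSV and segment candidate cell
# regions by Otsu thresholding, then gather simple per-region features.
import cv2

smear = cv2.imread("csf_smear.png")  # placeholder path
hsv = cv2.cvtColor(smear, cv2.COLOR_BGR2HSV)

# Threshold the saturation channel to separate stained cells from background.
_, mask = cv2.threshold(hsv[..., 1], 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Per-component features (area, mean color) to feed a neural-network classifier.
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
features = [(stats[i, cv2.CC_STAT_AREA],
             smear[labels == i].mean(axis=0))  # mean B, G, R of the region
            for i in range(1, n)]
```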


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
D. Granados-López ◽  
A. García-Rodríguez ◽  
S. García-Rodríguez ◽  
A. Suárez-García ◽  
M. Díez-Mediavilla ◽  
...  

Digital sky images are studied for the definition of sky conditions in accordance with the CIE Standard General Sky Guide. Adequate image-processing methods that highlight key image information are likewise analyzed, prior to the application of Artificial Neural Network (ANN) classification algorithms. Twenty-two image-processing methods are reviewed and applied to a broad and unbiased dataset of 1500 sky images recorded in Burgos, Spain, over an extensive experimental campaign. The dataset comprises one hundred images of each CIE standard sky type, previously classified from simultaneous sky scanner data. Image-processing methods based on color spaces, spectral features, and texture filters are applied. While the traditional RGB color space yielded good results (ANN accuracy of 86.6%), other color spaces, such as Hue Saturation Value (HSV), may be more appropriate and increased global classification accuracy. The use of either the green or the blue monochromatic channel improved sky classification, both for the fifteen CIE standard sky types and for a simpler classification into clear, partial, and overcast conditions. The main conclusion is that specific image-processing methods can improve ANN-algorithm accuracy, depending on the image information required for the classification problem.
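A hedged sketch of the monochromatic-channel preprocessing plus ANN classification idea; the image size, feature layout, and network architecture are assumptions rather than the paper's configuration.

```python
# Sketch: use one monochromatic channel (here blue) as the image
# representation and classify sky type with a small neural network.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def blue_channel_features(path, size=(32, 32)):
    img = cv2.imread(path)                # OpenCV loads images as B, G, R
    blue = cv2.resize(img[..., 0], size)  # keep only the blue channel
    return blue.ravel().astype(np.float32) / 255.0

# Hypothetical file list with CIE standard sky-type labels (1..15).
paths, labels = ["sky_0001.png", "sky_0002.png"], [12, 1]
X = np.array([blue_channel_features(p) for p in paths])

ann = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
ann.fit(X, labels)
```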


Plants ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 2750
Author(s):  
Samuel Prieto-Benítez ◽  
Raquel Ruiz-Checa ◽  
Victoria Bermejo-Bermejo ◽  
Ignacio Gonzalez-Fernandez

Ozone (O3) effects on the visual attraction traits of petals (color, perception, and area) are described for Erodium paularense, an endangered plant species. Plants were exposed to three O3 treatments in open-top chambers: charcoal-filtered air (CFA), ambient air (NFA), and ambient + 40 nL L−1 O3 (FU+). Changes in color were measured by spectral reflectance, from which the anthocyanin reflectance index (ARI) was calculated. Petal spectral reflectance was mapped onto the color spaces of bees, flies, and butterflies to study color changes as perceived by different pollinator guilds. Ozone-induced increases in petal reflectance and a rise in ARI under NFA were observed. Ambient O3 levels also induced a partial change in the color perception of flies, with the proportion of petals seen as blue increasing to 53%, compared to only 24% in CFA. Butterflies also showed the ability to partially perceive petal color changes, differentiating some CFA petals from NFA and FU+ petals through changes in the excitation of the UV photoreceptor. Importantly, O3 reduced petal area by 19.8% and 25% in NFA and FU+ relative to CFA, respectively. In sensitive species, O3 may affect visual attraction traits important for pollination, and spectral reflectance is proposed as a novel method for studying O3 effects on flower color.
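The anthocyanin reflectance index is commonly defined as ARI = 1/R(550 nm) - 1/R(700 nm); assuming that standard definition (the paper's exact bands are not given here), it can be computed from a sampled reflectance spectrum as follows.

```python
# ARI from a sampled reflectance spectrum, assuming the common definition
# ARI = 1/R(550 nm) - 1/R(700 nm).
import numpy as np

def ari(wavelengths_nm, reflectance):
    r550 = np.interp(550.0, wavelengths_nm, reflectance)
    r700 = np.interp(700.0, wavelengths_nm, reflectance)
    return 1.0 / r550 - 1.0 / r700
```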


Author(s):  
Ewa Ropelewska ◽  
Anna Wrzodak

Abstract: The aim of the research was to compare the possibility of distinguishing cultivars of processed beetroots using an image analysis technique and sensory evaluation. The differentiation of processed samples was tested for freeze-dried, lacto-fermented, and freeze-dried lacto-fermented beetroots of the cultivars 'Czerwona Kula' and 'Cylindra'. Textures from images of quarters of root slices, as well as sensory attributes evaluated by expert sensory assessors, were determined. Differences in the means of selected textures from the Lab, RGB, and XYZ color spaces were observed between cultivars of raw and processed beetroots. The raw beetroots 'Czerwona Kula' and 'Cylindra' were discriminated with an accuracy of up to 94.5% for models built on selected textures from the RGB color space. For the processed beetroots 'Czerwona Kula' and 'Cylindra', the accuracy reached 96% for freeze-dried, 99% for lacto-fermented, and 98.5% for freeze-dried lacto-fermented beetroots (in each case with the Lab color space). For the sensory attributes, no statistically significant differences were observed between the beetroot samples.
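A sketch of texture extraction from one channel of a chosen color space, using gray-level co-occurrence (GLCM) statistics from scikit-image; the specific texture set and parameters the authors used are not stated here, so these are illustrative.

```python
# Sketch: GLCM texture features from the L channel of the Lab color space,
# assuming OpenCV and scikit-image; the property set is illustrative.
import cv2
from skimage.feature import graycomatrix, graycoprops

slice_bgr = cv2.imread("beetroot_slice.png")      # placeholder path
lab = cv2.cvtColor(slice_bgr, cv2.COLOR_BGR2Lab)  # Lab color space

glcm = graycomatrix(lab[..., 0], distances=[1], angles=[0],
                    levels=256, symmetric=True, normed=True)
texture = {prop: graycoprops(glcm, prop)[0, 0]
           for prop in ("contrast", "correlation", "energy", "homogeneity")}
```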


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8090
Author(s):  
Joel Vidal ◽  
Chyi-Yeu Lin ◽  
Robert Martí

Recently, 6D pose estimation methods have shown robust performance on highly cluttered scenes and under different illumination conditions. However, occlusions remain challenging, with recognition rates decreasing to less than 10% for half-visible objects in some datasets. In this paper, we propose to use top-down visual attention and color cues to boost the performance of a state-of-the-art method in occluded scenarios. More specifically, color information is employed to detect potential points in the scene, improve feature matching, and compute more precise fitting scores. The proposed method is evaluated on the Linemod occluded (LM-O), TUD light (TUD-L), Tejani (IC-MI), and Doumanoglou (IC-BIN) datasets, as part of the SiSo BOP benchmark, which includes challenging highly occluded cases, illumination-changing scenarios, and multiple instances. The method is analyzed and discussed for different parameters, color spaces, and metrics. The presented results show the validity of the proposed approach and its robustness against illumination changes and multiple-instance scenarios, especially boosting performance on highly occluded cases. The proposed solution provides an absolute improvement of up to 30% for occlusion levels between 40% and 50%, outperforming other approaches with a best overall recall of 71% for LM-O, 92% for TUD-L, 99.3% for IC-MI, and 97.5% for IC-BIN.
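One plausible, simplified reading of the color-cue fitting score is a color-consistency check between the scene region and a render of the hypothesized pose; the sketch below compares hue histograms. This is an assumption for illustration, not the paper's actual scoring function.

```python
# Sketch: a color-consistency score via hue-histogram intersection between
# the scene crop and a render of the object under the hypothesized pose.
import cv2
import numpy as np

def hue_histogram(bgr, bins=30):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180]).ravel()
    return hist / max(hist.sum(), 1e-9)  # normalize to a distribution

def color_fitting_score(scene_crop, model_render):
    """Histogram intersection in [0, 1]; higher means better color agreement."""
    return float(np.minimum(hue_histogram(scene_crop),
                            hue_histogram(model_render)).sum())
```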


Author(s):  
Alexey Galuza ◽  
Olga Kostiuk ◽  
Alla Savchenko ◽  
Anastasiia Boiko

The work is devoted to the problem of comparing objects by color. The following statement of the problem is considered: among a set of objects, find the object whose color is most similar to the color of a given object. It is assumed that only the spectrum (transmission, reflection, or radiation) of each object is known, which is an exhaustive characteristic of its color, and that the spectrum of the radiation source is also known. The use of standard methods for determining color differences has shown that the problem does not have an unambiguous solution. Two approaches to its solution are proposed: the first is based on mapping the spectrum into a color space and then calculating the Euclidean distance, and the second on a direct comparison of the spectra as functional dependences of intensity on wavelength. Within each approach, two criteria for the color "similarity" of objects are proposed, along with an original approach to assessing the effectiveness of these criteria. The assessment is based on expert evaluations of the color proximity of glass samples with known transmission spectra from a standard set. For each sample in the set, experts selected the glass closest in color from the remaining samples, after which a generalized expert opinion was formed. To assess the quality of each criterion, the remaining samples were ranked, for each test glass, in order of increasing color distance to that glass, and the results were compared with the generalized expert opinion. To make the comparison "fuzzy", a set of the five glasses closest in color (for each criterion) was considered for each test glass. Effectiveness estimates were obtained for each criterion on a set of 89 glasses, and an approach to constructing more effective composite criteria is proposed.
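A sketch of the two families of criteria, assuming the CIE color-matching functions, the source spectrum, and the object spectra are supplied as arrays on the same uniformly spaced wavelength grid; the exact criteria used in the work may differ.

```python
# Sketch of the two criterion families: (1) map spectra to XYZ and take the
# Euclidean distance, (2) compare the spectra directly as functions.
import numpy as np

def spectrum_to_xyz(transmittance, source, xbar, ybar, zbar):
    """Integrate spectrum * source against the CIE color-matching functions."""
    stimulus = transmittance * source
    xyz = np.array([np.trapz(stimulus * cmf) for cmf in (xbar, ybar, zbar)])
    return xyz / np.trapz(source * ybar)  # normalize to the source luminance

def color_space_distance(t1, t2, source, xbar, ybar, zbar):
    """Criterion family 1: Euclidean distance in a color space."""
    return np.linalg.norm(spectrum_to_xyz(t1, source, xbar, ybar, zbar)
                          - spectrum_to_xyz(t2, source, xbar, ybar, zbar))

def spectral_distance(t1, t2):
    """Criterion family 2: RMS difference between the spectra themselves."""
    return np.sqrt(np.mean((t1 - t2) ** 2))
```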


Author(s):  
Lokesh Nandanwar ◽  
Palaiahnakote Shivakumara ◽  
Divya Krishnani ◽  
Raghavendra Ramachandra ◽  
Tong Lu ◽  
...  

Owing to its many applications, research on personal traits using social media information has become an important area. In this paper, a new method for the classification of behavior-oriented social images uploaded to various social media platforms is presented. The proposed method introduces a multimodality concept using the skin of different parts of the human body together with background information, such as indoor and outdoor environments. For each image, the proposed method detects skin candidate components based on the R, G, and B color channels and entropy features. An iterative mutual nearest-neighbor approach is proposed to detect accurate skin candidate components, which form the foreground components. Next, the proposed method detects the remaining part (other than skin components) as background components, based on the structure tensor of the R, G, and B channels and the Maximally Stable Extremal Regions (MSER) concept in the wavelet domain. The Hanman Transform is then explored for extracting context features from foreground and background components through clustering and fusion operations. These features are fed to an SVM classifier for the classification of behavior-oriented images. Comprehensive experiments on 10-class datasets of Normal Behavior-Oriented Social media Images (NBSI) and Abnormal Behavior-Oriented Social media Images (ABSI) show that the proposed method is effective and outperforms existing methods in terms of average classification rate. Results on a benchmark dataset of five personality-trait classes and on two classes of emotions with different facial expressions (the FERPlus dataset) also demonstrate the robustness of the proposed method over existing methods.
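A hedged sketch of the entropy-based skin-candidate step: local entropy is computed per color channel and smooth regions are kept. The window size and threshold are assumptions; the paper's iterative mutual nearest-neighbor refinement is not reproduced here.

```python
# Sketch: local entropy per color channel; smooth (low-entropy) regions in
# all three channels are kept as skin candidates.
import cv2
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk

image = cv2.imread("social_image.png")  # placeholder path
channel_entropy = np.stack(
    [entropy(image[..., c], disk(5)) for c in range(3)], axis=-1)

# Candidate skin pixels: entropy below a threshold in every channel.
candidates = (channel_entropy < 4.0).all(axis=-1)
```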


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
K. Upendra Raju ◽  
N. Amutha Prabha

Purpose: Steganography is a data-hiding technique used in data security. When data are transmitted through a channel, there is no guarantee that they arrive safely. A variety of data-security techniques exist, such as patchwork, low-bit-rate data hiding, and lossy compression. This paper aims to increase security and robustness.

Design/methodology/approach: This paper describes an approach to multiple-image steganography based on the combination of the lifting wavelet transform (LWT) and the discrete cosine transform (DCT). There is one cover image and two secret images. The cover image is corrupted with one of several noise types (Gaussian, salt-and-pepper, Poisson, or speckle) and converted into the YCbCr, HSV, and Lab color spaces.

Findings: Owing to the vast growth of Internet access and multimedia technology, it has become very simple to hack and trace secret information. Using this steganographic process in reversible data hiding (RDH) helps to protect secret information.

Originality/value: The color-space-converted image is divided into four sub-bands using the lifting wavelet transform. Selecting the lower band, the discrete cosine transform is computed to hide the two secret images in the cover image, and one of the transformed secret images is further converted with the Arnold transform to obtain the encrypted/embedded image. To extract the stego image, the reverse operations are applied. For comparison, PSNR, SSIM, and MSE values are calculated by applying the same process in each of the YCbCr, HSV, and Lab color spaces. The experimental results show that one of these color spaces gives better performance than the others.
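A minimal sketch of the LWT+DCT embedding idea, where PyWavelets' standard DWT stands in for a lifting implementation and the strength factor alpha is an assumption; the full scheme (two secret images, Arnold scrambling, noise, and color-space conversion) is omitted.

```python
# Sketch: hide one secret image's DCT coefficients inside the DCT of the
# cover's low-frequency wavelet sub-band, then reconstruct the stego image.
import cv2
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed(cover_gray, secret_gray, alpha=0.05):
    LL, (LH, HL, HH) = pywt.dwt2(cover_gray.astype(float), "haar")
    secret = cv2.resize(secret_gray, (LL.shape[1], LL.shape[0])).astype(float)
    stego_LL = idctn(dctn(LL, norm="ortho") + alpha * dctn(secret, norm="ortho"),
                     norm="ortho")
    return pywt.idwt2((stego_LL, (LH, HL, HH)), "haar")

def psnr(original, stego):
    mse = np.mean((original.astype(float) - stego) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```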


2021 ◽  
Vol 24 (3) ◽  
pp. 107-110
Author(s):  
Leonid D. Lozhkin ◽  
Alexander A. Kuzmenko

The perceptual uniformity (equidistance) of a color space plays a significant role in determining color differences in color transmission systems. Only those color spaces in which equal changes in the visual perception of color correspond to equal changes in the color coordinates can be considered strictly uniform (equal-contrast). Currently, the International Commission on Illumination (CIE) has adopted a number of color spaces called equal-contrast. The article presents the results of a study of the color spaces adopted by the CIE as equal-contrast, i.e., of the differences in color-discrimination thresholds across different areas of the color locus. The color spaces investigated are CIE 1931 (RGB), CIE 1931 (x, y), CIE 1960 (u, v), CIE 1976 (u*, v*), and CIELAB (a*, b*).
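To illustrate the kind of non-uniformity being measured, the sketch below computes the distance between the same pair of chromaticities in the CIE 1931 (x, y) and CIE 1976 (u', v') diagrams using the standard transform; the sample points are arbitrary.

```python
# The same pair of chromaticities at different distances in CIE 1931 (x, y)
# and CIE 1976 (u', v'); the sample points are arbitrary.
import numpy as np

def xy_to_uv_prime(x, y):
    d = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / d, 9.0 * y / d

c1, c2 = (0.30, 0.60), (0.32, 0.58)  # an arbitrary green-region pair
d_xy = np.hypot(c2[0] - c1[0], c2[1] - c1[1])
d_uv = np.hypot(*np.subtract(xy_to_uv_prime(*c2), xy_to_uv_prime(*c1)))
print(f"distance in (x, y): {d_xy:.4f}; in (u', v'): {d_uv:.4f}")
```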

