Pixel-Based Image Processing for CIE Standard Sky Classification through ANN

Complexity, 2021, Vol. 2021, pp. 1-15
Author(s): D. Granados-López, A. García-Rodríguez, S. García-Rodríguez, A. Suárez-García, M. Díez-Mediavilla, et al.

Digital sky images are studied for the definition of sky conditions in accordance with the CIE Standard General Sky Guide. Likewise, adequate image-processing methods are analyzed that highlight key image information prior to the application of Artificial Neural Network (ANN) classification algorithms. Twenty-two image-processing methods are reviewed and applied to a broad and unbiased dataset of 1500 sky images recorded in Burgos, Spain, over an extensive experimental campaign. The dataset comprises one hundred images of each CIE standard sky type, previously classified from simultaneous sky scanner data. Image-processing methods based on color spaces, spectral features, and texture filters are applied. While the traditional RGB color space yielded good results (ANN accuracy of 86.6%), other color spaces, such as Hue Saturation Value (HSV), may be more appropriate and increased the accuracy of the global classification. The use of either the green or the blue monochromatic channel improved sky classification, both for the fifteen CIE standard sky types and for the simpler classification into clear, partial, and overcast conditions. The main conclusion was that specific image-processing methods can improve ANN accuracy, depending on the image information required for the classification problem.
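As a rough illustration of the kind of pre-processing involved, the sketch below (assuming OpenCV and NumPy, with a hypothetical file name) converts a sky image to the HSV color space, pulls out the green and blue monochromatic channels, and flattens one channel into an ANN feature vector. It is only a sketch of the general workflow, not the paper's twenty-two methods.

import cv2
import numpy as np

# Load a sky image (hypothetical file name); OpenCV reads it as BGR.
img_bgr = cv2.imread("sky_sample.jpg")

# Alternative representations discussed above.
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
img_hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
green = img_rgb[:, :, 1]   # green monochromatic channel
blue = img_rgb[:, :, 2]    # blue monochromatic channel

def to_feature_vector(channel, size=(32, 32)):
    """Downsample a single channel and flatten it into an ANN input vector."""
    small = cv2.resize(channel, size, interpolation=cv2.INTER_AREA)
    return (small.astype(np.float32) / 255.0).ravel()

x_blue = to_feature_vector(blue)   # e.g. a 1024-dimensional ANN input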

Agriculture, 2020, Vol. 11 (1), pp. 6
Author(s): Ewa Ropelewska

The aim of this study was to evaluate the usefulness of the texture and geometric parameters of the endocarp (pit) for distinguishing different cultivars of sweet cherries using image analysis. Texture features were computed from images converted to individual color channels, together with the geometric parameters of the endocarp (pits) of the sweet cherry cultivars ‘Kordia’, ‘Lapins’, and ‘Büttner’s Red’. For the set combining the selected textures from all color channels, the accuracy reached 100% when comparing ‘Kordia’ vs. ‘Lapins’ and ‘Kordia’ vs. ‘Büttner’s Red’ for all classifiers. The pits of ‘Kordia’ and ‘Lapins’, as well as of ‘Kordia’ and ‘Büttner’s Red’, were also 100% correctly discriminated by discriminative models built separately for the RGB, Lab, and XYZ color spaces, the G, L, and Y color channels, and models combining selected textural and geometric features. For discriminating between ‘Lapins’ and ‘Büttner’s Red’ pits, slightly lower accuracies were obtained: up to 93% for models built on textures selected from all color channels, 91% for the RGB color space, 92% for the Lab and XYZ color spaces, 84% for the G and L color channels, 83% for the Y channel, 94% for geometric features, and 96% for combined textural and geometric features.
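A minimal sketch of how texture descriptors might be pulled from a single color channel is given below, using gray-level co-occurrence matrices from scikit-image (version 0.19 or later, where the functions are spelled graycomatrix/graycoprops); the file name and channel choice are illustrative, and the paper's actual texture set is broader.

import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Load a pit image (hypothetical file name) and take the L channel of Lab.
img_bgr = cv2.imread("cherry_pit.png")
L_channel = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)[:, :, 0]

# Gray-level co-occurrence matrix at distance 1, two orientations.
glcm = graycomatrix(L_channel, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

# A few standard GLCM texture descriptors, averaged over the orientations.
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
# These would be combined with geometric descriptors (area, perimeter, ...)
# before training the discriminative models mentioned above.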


2021, Vol. 13 (5), pp. 939
Author(s): Yongan Xue, Jinling Zhao, Mingmei Zhang

To accurately extract cultivated land boundaries from high-resolution remote sensing imagery, an improved watershed segmentation algorithm was proposed herein based on a combination of pre- and post-improvement procedures. Image contrast enhancement was used as the pre-improvement, while the color distance of the Commission Internationale de l'Éclairage (CIE) color spaces, including Lab and Luv, was used as the regional similarity measure for region merging as the post-improvement. Furthermore, the area relative error criterion (δA), the pixel quantity error criterion (δP), and the consistency criterion (Khat) were used to evaluate the image segmentation accuracy. Region merging in the Red–Green–Blue (RGB) color space was selected as the baseline against which the proposed algorithm was compared for extracting cultivated land boundaries. The validation experiments were performed on a subset of a Chinese Gaofen-2 (GF-2) remote sensing image covering 0.12 km². The results showed the following: (1) The contrast-enhanced image produced an obvious gain in both segmentation quality and time efficiency with the improved algorithm; time efficiency increased by 10.31%, 60.00%, and 40.28% in the RGB, Lab, and Luv color spaces, respectively. (2) The optimal segmentation and merging scale parameters in the RGB, Lab, and Luv color spaces were a minimum area C of 2000, 1900, and 2000, and a color difference D of 1000, 40, and 40, respectively. (3) The algorithm improved the time efficiency of cultivated land boundary extraction in the Lab and Luv color spaces by 35.16% and 29.58%, respectively, compared to the RGB color space. Relative to the RGB color space, the extraction accuracy measured by δA, δP, and Khat improved by 76.92%, 62.01%, and 16.83%, respectively, in the Lab color space, and by 55.79%, 49.67%, and 13.42% in the Luv color space. (4) In terms of visual comparison, time efficiency, and segmentation accuracy, the overall extraction performance of the proposed algorithm was clearly better than that of the RGB color space-based algorithm, and the established accuracy evaluation indicators proved consistent with the visual evaluation. (5) The proposed method showed satisfactory transferability on a wider test area covering 1 km². In addition, the proposed method applies image contrast enhancement and then performs region merging in the CIE color space on the simulated-immersion watershed segmentation results. It is a useful attempt at extracting cultivated land boundaries with the watershed segmentation algorithm and provides a reference for enhancing the watershed algorithm.
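The sketch below illustrates the two improvement steps around a standard watershed: contrast enhancement before segmentation and a Lab color-distance test for merging adjacent regions afterwards. It assumes OpenCV, NumPy, and scikit-image; the merge threshold and file name are illustrative, not the tuned parameters reported above.

import cv2
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

img_bgr = cv2.imread("gf2_subset.tif")   # hypothetical GF-2 subset

# Pre-improvement: contrast enhancement (here CLAHE on the luminance channel).
lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
lab[:, :, 0] = clahe.apply(lab[:, :, 0])
enhanced = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# Simulated-immersion watershed on a gradient image -> over-segmented labels.
gray = cv2.cvtColor(enhanced, cv2.COLOR_BGR2GRAY)
labels = watershed(sobel(gray))

# Post-improvement: merge neighbouring regions whose mean Lab colors are close.
lab_f = cv2.cvtColor(enhanced, cv2.COLOR_BGR2LAB).astype(np.float32)

def mean_lab(region_id):
    return lab_f[labels == region_id].mean(axis=0)

def should_merge(r1, r2, threshold=10.0):
    """Euclidean Lab distance as the regional similarity measure."""
    return np.linalg.norm(mean_lab(r1) - mean_lab(r2)) < threshold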


Author(s): Sumitra Kisan, Sarojananda Mishra, Ajay Chawda, Sanjay Nayak

This article describes how the fractal dimension (FD) plays a vital role in fractal geometry. It is a measure that characterizes the complexity and irregularity of fractals, denoting the amount of space they fill. There are many procedures for evaluating the dimension of fractal surfaces, such as the box count, differential box count, and improved differential box count methods, which are primarily used for grayscale images. The authors' objective in this article is to estimate the fractal dimension of color images using different color models. They propose a novel method for the estimation in the CMY and HSV color spaces. To obtain their results, they performed test operations on a number of color images in the RGB color space. The authors present their experimental results and discuss the issues that characterize the approach. The article concludes with an analysis of the calculated FDs for images in different color spaces.
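For orientation, the sketch below implements plain box counting on one binarized color channel with NumPy; this is the basic procedure the article builds on, not the authors' improved differential box count, and the HSV channel choice and threshold are illustrative.

import cv2
import numpy as np

def box_count(binary, box_size):
    """Count box_size x box_size boxes containing at least one set pixel."""
    h, w = binary.shape
    h_trim, w_trim = h - h % box_size, w - w % box_size
    blocks = binary[:h_trim, :w_trim].reshape(
        h_trim // box_size, box_size, w_trim // box_size, box_size)
    return np.count_nonzero(blocks.any(axis=(1, 3)))

def fractal_dimension(channel, threshold=128):
    """FD estimate as the negated slope of log(count) vs. log(box size)."""
    binary = channel > threshold
    sizes = [2, 4, 8, 16, 32, 64]
    counts = [box_count(binary, s) for s in sizes]
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

img_hsv = cv2.cvtColor(cv2.imread("sample.png"), cv2.COLOR_BGR2HSV)
fd_value = fractal_dimension(img_hsv[:, :, 2])   # FD of the V channel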


2019, Vol. 2019 (1), pp. 86-90
Author(s): Hakki Can Karaimer, Michael S. Brown

Most modern cameras allow captured images to be saved in two color spaces: (1) raw-RGB and (2) standard RGB (sRGB). The raw-RGB image represents a scene-referred sensor image whose RGB values are specific to the color sensitivities of the sensor's color filter array. The sRGB image represents a display-referred image that has been rendered through the camera's image signal processor (ISP). The rendering process involves several camera-specific photo-finishing manipulations intended to make the sRGB image visually pleasing. For applications that want to use a camera for purposes beyond photography, both the raw-RGB and sRGB color spaces are undesirable. For example, because the raw-RGB color space is dependent on the camera's sensor, it is challenging to develop applications that work across multiple cameras. Similarly, the camera-specific photo-finishing operations used to render sRGB images also hinder applications intended to run on different cameras. Interestingly, the ISP camera pipeline includes a colorimetric conversion stage where the raw-RGB images are converted to a device-independent color space. However, this image state is not accessible. In this paper, we advocate for the ability to access the colorimetric image state and recommend that cameras output a third image format that is based on this device-independent colorimetric space. To this end, we perform experiments to demonstrate that image pixel values in a colorimetric space are more similar across different makes and models than in sRGB and raw-RGB.
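To make the colorimetric image state concrete, the sketch below maps sRGB pixel values into the device-independent CIE XYZ space using the textbook sRGB linearization and D65 matrix; it stands in for, but is not, the camera-specific colorimetric conversion performed inside an ISP.

import numpy as np

# Standard sRGB (D65) to CIE XYZ matrix.
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(srgb):
    """srgb: float array in [0, 1], shape (..., 3); returns CIE XYZ values."""
    srgb = np.asarray(srgb, dtype=np.float64)
    # Undo the sRGB gamma encoding, then apply the linear colorimetric matrix.
    linear = np.where(srgb <= 0.04045,
                      srgb / 12.92,
                      ((srgb + 0.055) / 1.055) ** 2.4)
    return linear @ SRGB_TO_XYZ.T

xyz = srgb_to_xyz([[0.5, 0.5, 0.5]])   # a mid-gray sRGB triplet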


2015, pp. 1233-1245
Author(s): T. Chandrakanth, B. Sandhya

Advances in imaging and computing hardware have led to an explosion in the use of color images in image processing, graphics, and computer vision applications across various domains such as medical imaging, satellite imagery, document analysis, and biometrics, to name a few. However, these images are subjected to a wide variety of distortions during their acquisition, subsequent compression, transmission, processing, and reproduction, which degrades their visual quality. Hence, objective quality assessment of color images has emerged as one of the essential operations in image processing. Over the last two decades, efforts have been made to design an image quality metric that can be computed simply yet accurately reflects the subjective quality of human perception. In this paper, the authors evaluate the quality assessment of color images using the SSIM (structural similarity index) metric across various color spaces. They conducted experiments to study the effect of color spaces on metric-based and distance-based quality assessment. The authors propose a metric using the CIE Lab color space and SSIM, which correlates better with subjective assessments on a benchmark dataset.
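A minimal sketch of the general idea, assuming scikit-image, is to compute SSIM on the lightness channel of CIE Lab rather than on a plain grayscale conversion; the file names below are placeholders and this is not the authors' exact metric definition.

from skimage import io, color
from skimage.metrics import structural_similarity

# Placeholder file names; drop a possible alpha channel.
ref = io.imread("reference.png")[..., :3]
dist = io.imread("distorted.png")[..., :3]

# SSIM on a plain grayscale conversion (values in [0, 1]).
ssim_gray = structural_similarity(color.rgb2gray(ref), color.rgb2gray(dist),
                                  data_range=1.0)

# SSIM on the L channel of CIE Lab (L ranges over [0, 100]).
ref_L = color.rgb2lab(ref)[:, :, 0]
dist_L = color.rgb2lab(dist)[:, :, 0]
ssim_lab = structural_similarity(ref_L, dist_L, data_range=100.0)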


2013, pp. 112-128
Author(s): Ramón Moreno, Manuel Graña, Kurosh Madani

Representing RGB color space points in spherical coordinates retains the chromatic components of image pixel colors while easily separating out the intensity component. This representation allows the definition of a chromatic distance and a hybrid gradient with good perceptual color constancy properties. In this chapter, the authors present a watershed-based image segmentation method using this hybrid gradient. Oversegmentation is resolved by applying a region merging strategy based on the chromatic distance defined on the spherical coordinate representation. The chapter shows the robustness and performance of the approach on well-known test images, the Berkeley benchmark image database, and images taken with a NAO robot.
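A small sketch of the underlying idea follows: RGB vectors expressed in spherical coordinates keep intensity in the radius and chromaticity in the two angles, so the angle between color vectors acts as an intensity-invariant chromatic distance. This is only a sketch of the idea, not the chapter's exact definitions.

import numpy as np

def rgb_to_spherical(rgb):
    """rgb: float array in [0, 1], shape (..., 3) -> (radius, theta, phi)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    radius = np.sqrt(r**2 + g**2 + b**2)   # intensity component
    theta = np.arccos(np.clip(b / np.maximum(radius, 1e-12), -1.0, 1.0))
    phi = np.arctan2(g, r)                 # the two chromatic angles
    return radius, theta, phi

def chromatic_distance(rgb1, rgb2):
    """Angle between two color vectors, independent of their intensity."""
    v1, v2 = np.asarray(rgb1, float), np.asarray(rgb2, float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

# Two reds with the same chromaticity but different intensity -> distance ~ 0.
d = chromatic_distance([0.8, 0.2, 0.2], [0.4, 0.1, 0.1])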


Author(s): Felicia Anisoara Damian, Simona Moldovanu, Luminita Moraru

This study aims to investigate the ability of an artificial neural network to differentiate between malignant and benign skin lesions based on two statistical texture features computed in the RGB (R red, G green, B blue) and YIQ (Y luminance, I and Q chromatic differences) color spaces. The targeted features are skewness (S) and kurtosis (K), which are extracted from the histograms of each color channel of both color spaces for the two classes of lesions: nevi and melanomas. The extracted data are used to train Feed-Forward Back Propagation Networks (FFBPNs). The number of neurons in the hidden layer varies: 8, 16, 24, or 32. The results indicate that the skewness features computed for the red channel of the RGB color space are the best choice for this task. The reported results show the advantages of monochrome channel representations for skin lesion diagnosis.
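The feature extraction step can be sketched as below, assuming OpenCV and SciPy: skewness and kurtosis are computed for each channel of the RGB and YIQ representations of a lesion image (the YIQ matrix is the standard NTSC transform and the file name is a placeholder).

import cv2
import numpy as np
from scipy.stats import skew, kurtosis

# Standard NTSC RGB -> YIQ transform.
RGB_TO_YIQ = np.array([[0.299, 0.587, 0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523, 0.312]])

# Placeholder file name; normalize to [0, 1] and build the YIQ representation.
img_rgb = cv2.cvtColor(cv2.imread("lesion.png"), cv2.COLOR_BGR2RGB) / 255.0
img_yiq = img_rgb @ RGB_TO_YIQ.T

# Skewness and kurtosis of each channel in both color spaces (12 values).
features = []
for space in (img_rgb, img_yiq):
    for c in range(3):
        values = space[:, :, c].ravel()
        features.extend([skew(values), kurtosis(values)])
# 'features' would form one training sample for the FFBPN.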

