Color space influence on ANN skin lesion classification using statistics texture feature

Author(s):  
Felicia Anisoara Damian ◽  
Simona Moldovanu ◽  
Luminita Moraru

This study investigates the ability of an artificial neural network to differentiate between malignant and benign skin lesions based on two statistical texture features computed in the RGB (R red, G green, B blue) and YIQ (Y luminance, I and Q chromatic differences) color spaces. The targeted features are skewness (S) and kurtosis (K), extracted from the histograms of each color channel of both color spaces for the two classes of lesions: nevi and melanomas. The extracted data are used to train Feed-Forward Back Propagation Networks (FFBPNs) whose hidden layer contains 8, 16, 24, or 32 neurons. The results indicate that the skewness features computed for the red channel of the RGB color space are the best choice for reaching the goal of our study. The reported results show the advantages of a monochrome channel representation for skin lesion diagnosis.
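As an illustration of the feature-extraction step described above, the following minimal Python sketch computes skewness and kurtosis per color channel for an RGB image and its YIQ conversion; the NTSC-style YIQ matrix, the random placeholder image, and the feature ordering are assumptions, not the authors' exact pipeline.

```python
# Sketch: per-channel skewness/kurtosis features for RGB and YIQ
# (illustrative only; not the authors' exact pipeline).
import numpy as np
from scipy.stats import skew, kurtosis

def rgb_to_yiq(rgb):
    """Convert an RGB image in [0, 1] to YIQ using the standard NTSC matrix."""
    m = np.array([[0.299,  0.587,  0.114],
                  [0.596, -0.274, -0.322],
                  [0.211, -0.523,  0.312]])
    return rgb @ m.T

def channel_stats(image):
    """Return [skewness, kurtosis] for each of the three channels."""
    feats = []
    for c in range(3):
        values = image[..., c].ravel()
        feats.extend([skew(values), kurtosis(values)])
    return feats

rgb = np.random.rand(128, 128, 3)      # placeholder for a lesion image
features = channel_stats(rgb) + channel_stats(rgb_to_yiq(rgb))
print(len(features))                   # 12 features: S and K per channel, per color space
```

A vector of this kind, one S and K value per channel and color space, would be the sort of input fed to the FFBPNs.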

Agriculture ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 6
Author(s):  
Ewa Ropelewska

The aim of this study was to evaluate the usefulness of the texture and geometric parameters of the endocarp (pit) for distinguishing different cultivars of sweet cherries using image analysis. The textures from images converted to individual color channels and the geometric parameters of the endocarps (pits) of the sweet cherry cultivars ‘Kordia’, ‘Lapins’, and ‘Büttner’s Red’ were calculated. For the set combining the selected textures from all color channels, the accuracy reached 100% when comparing ‘Kordia’ vs. ‘Lapins’ and ‘Kordia’ vs. ‘Büttner’s Red’ for all classifiers. The pits of ‘Kordia’ and ‘Lapins’, as well as ‘Kordia’ and ‘Büttner’s Red’, were also discriminated with 100% accuracy by models built separately for the RGB, Lab and XYZ color spaces, for the G, L and Y color channels, and for models combining selected textural and geometric features. For discriminating the ‘Lapins’ and ‘Büttner’s Red’ pits, slightly lower accuracies were obtained: up to 93% for models built on textures selected from all color channels, 91% for the RGB color space, 92% for the Lab and XYZ color spaces, 84% for the G and L color channels, 83% for the Y channel, 94% for geometric features, and 96% for combined textural and geometric features.
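For readers unfamiliar with the texture part of such a pipeline, below is a hedged sketch of GLCM texture extraction from individual color channels using scikit-image (version ≥ 0.19 for the graycomatrix/graycoprops names); the distances, angles, selected properties, and placeholder image are illustrative choices, not the study's exact parameter set.

```python
# Sketch: GLCM texture features from individual color channels of a pit image
# (scikit-image >= 0.19; parameters are illustrative).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(channel_u8, distances=(1, 3), angles=(0, np.pi / 2)):
    """Compute a small set of GLCM properties for one 8-bit color channel."""
    glcm = graycomatrix(channel_u8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "correlation", "energy", "homogeneity")
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

image = (np.random.rand(100, 100, 3) * 255).astype(np.uint8)  # placeholder pit image
# Features from the R, G and B channels; Lab or XYZ channels could be added the same way.
features = np.concatenate([glcm_features(image[..., c]) for c in range(3)])
print(features.shape)
```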


2021 ◽  
Vol 13 (5) ◽  
pp. 939
Author(s):  
Yongan Xue ◽  
Jinling Zhao ◽  
Mingmei Zhang

To accurately extract cultivated land boundaries from high-resolution remote sensing imagery, an improved watershed segmentation algorithm combining pre- and post-improvement procedures is proposed herein. Image contrast enhancement was used as the pre-improvement, while the color distance in the Commission Internationale de l'Eclairage (CIE) color spaces, namely Lab and Luv, was used as the regional similarity measure for region merging as the post-improvement. Furthermore, the area relative error criterion (δA), the pixel quantity error criterion (δP), and the consistency criterion (Khat) were used to evaluate the image segmentation accuracy. Region merging in the Red–Green–Blue (RGB) color space was selected as the baseline against which the proposed algorithm was compared for extracting cultivated land boundaries. The validation experiments were performed on a subset of a Chinese Gaofen-2 (GF-2) remote sensing image with a coverage area of 0.12 km2. The results showed the following: (1) The contrast-enhanced image exhibited an obvious gain in segmentation quality and time efficiency using the improved algorithm; the time efficiency increased by 10.31%, 60.00%, and 40.28% in the RGB, Lab, and Luv color spaces, respectively. (2) The optimal segmentation and merging scale parameters in the RGB, Lab, and Luv color spaces were a minimum area C of 2000, 1900, and 2000, and a color difference D of 1000, 40, and 40, respectively. (3) The algorithm improved the time efficiency of cultivated land boundary extraction in the Lab and Luv color spaces by 35.16% and 29.58%, respectively, compared to the RGB color space. Compared to the RGB color space, the extraction accuracy measured by δA, δP, and Khat improved by 76.92%, 62.01%, and 16.83% in the Lab color space, and by 55.79%, 49.67%, and 13.42% in the Luv color space, respectively. (4) In terms of visual comparison, time efficiency, and segmentation accuracy, the overall extraction effect of the proposed algorithm was clearly better than that of the RGB-color-space-based algorithm, and the established accuracy evaluation indicators proved to be consistent with the visual evaluation. (5) The proposed method showed satisfactory transferability on a wider test area with a coverage area of 1 km2. In addition, the proposed method, based on image contrast enhancement, performs region merging in the CIE color space on the simulated immersion watershed segmentation results. It is a useful attempt at extending the watershed segmentation algorithm to extract cultivated land boundaries and provides a reference for enhancing the watershed algorithm.
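The following sketch illustrates the general idea of watershed over-segmentation followed by region merging based on Lab color distance, using scikit-image's RAG utilities (skimage ≥ 0.20 for skimage.graph) as an approximation; the marker count and merging threshold are placeholders, not the optimal C and D parameters reported above.

```python
# Sketch: watershed over-segmentation followed by region merging using
# Lab color distance (an approximation of the paper's approach; the
# threshold and marker count are illustrative, not the reported parameters).
import numpy as np
from skimage import color, filters, segmentation, graph
from skimage.util import img_as_float

def segment_and_merge(rgb, n_markers=400, color_thresh=15.0):
    rgb = img_as_float(rgb)
    gradient = filters.sobel(color.rgb2gray(rgb))
    # Simulated-immersion watershed produces an over-segmentation.
    labels = segmentation.watershed(gradient, markers=n_markers, compactness=0.001)
    # Merge adjacent regions whose mean Lab colors are closer than the threshold.
    lab = color.rgb2lab(rgb)
    rag = graph.rag_mean_color(lab, labels, mode="distance")
    return graph.cut_threshold(labels, rag, color_thresh)

image = (np.random.rand(200, 200, 3) * 255).astype(np.uint8)  # placeholder image tile
merged = segment_and_merge(image)
print(merged.max(), "regions after merging")
```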


Author(s):  
Sumitra Kisan ◽  
Sarojananda Mishra ◽  
Ajay Chawda ◽  
Sanjay Nayak

This article describes how the fractal dimension (FD) plays a vital role in fractal geometry. It is a measure that characterizes the complexity and irregularity of fractals, denoting the amount of space they fill. There are many procedures for evaluating the dimension of fractal surfaces, such as the box count, differential box count, and improved differential box count methods. These methods are primarily used for greyscale images. The authors' objective in this article is to estimate the fractal dimension of color images using different color models. The authors have proposed a novel method for the estimation in the CMY and HSV color spaces. To obtain the results, they performed test operations on a number of color images in the RGB color space. The authors present their experimental results and discuss the issues that characterize the approach. Finally, the authors conclude the article with an analysis of the calculated FDs for images in different color spaces.
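A hedged sketch of the counting-and-fitting idea behind fractal dimension estimation is given below; it implements the classic box count on a single binarized channel rather than the differential box count or the authors' CMY/HSV extension, and the box sizes and placeholder channel are illustrative.

```python
# Sketch: classic box-counting fractal dimension of a single (binarized)
# color channel; the authors use differential box counting on CMY/HSV,
# this simpler variant just illustrates the counting-and-fitting idea.
import numpy as np

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate FD as the slope of log(count) vs log(1/size)."""
    counts = []
    for s in sizes:
        h, w = binary.shape
        hs, ws = h - h % s, w - w % s             # crop to a multiple of the box size
        blocks = binary[:hs, :ws].reshape(hs // s, s, ws // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()  # boxes containing at least one pixel
        counts.append(occupied)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

channel = np.random.rand(256, 256) > 0.5          # placeholder: thresholded color channel
print(box_counting_dimension(channel))
```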


2013 ◽  
Vol 846-847 ◽  
pp. 1266-1269
Author(s):  
Meng Zhao ◽  
Zhan Ping Li

Segmentation of motion in an image sequence is one of the most challenging problems in image processing. In image analysis engineering, accurate statistics of sport images is one of the most important subjects, and at the same time one that finds numerous applications. In this paper, we propose a robust multi-layer background subtraction technique which takes advantage of local texture features represented by local binary patterns (LBP) and photometric invariant color measurements in the RGB color space. Thanks to a simple layer-based strategy, the approach can model moving background pixels with quasiperiodic flickering as well as background scenes that vary over time due to the addition and removal of long-time stationary objects, which plays an important role in the accurate analysis of sport images. Segmentation of sport images is successfully realized by means of the multi-layer background subtraction method, and the sport image statistics are then computed precisely.
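To make the texture component concrete, here is a minimal sketch of block-wise LBP histograms such as a layered background model might maintain; it uses scikit-image's local_binary_pattern, and the block size, LBP parameters, and comparison strategy are assumptions rather than the paper's exact design.

```python
# Sketch: block-wise LBP histograms as the texture part of a background
# model (a simplified stand-in for the multi-layer model in the paper).
import numpy as np
from skimage.feature import local_binary_pattern

def block_lbp_histograms(gray, block=16, P=8, R=1):
    """Return one normalized uniform-LBP histogram per non-overlapping block."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                                # uniform patterns plus a "non-uniform" bin
    h, w = gray.shape
    hists = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = lbp[y:y + block, x:x + block]
            hist, _ = np.histogram(patch, bins=n_bins, range=(0, n_bins))
            hists.append(hist / hist.sum())
    return np.array(hists)

frame = np.random.rand(128, 128)                  # placeholder grayscale frame
model = block_lbp_histograms(frame)
# A background layer would keep such histograms per block and compare new
# frames to them (e.g., via histogram intersection) to flag foreground blocks.
print(model.shape)
```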


Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Yukun Yang ◽  
Jing Nie ◽  
Za Kan ◽  
Shuo Yang ◽  
Hangxing Zhao ◽  
...  

Background: At present, residual film pollution in cotton fields is a serious problem. The commonly used recycling method is a manually driven recycling machine, which is laborious and time-consuming. Developing a visual navigation system for residual film recovery would help improve work efficiency. The key technology in the visual navigation system is cotton stubble detection; successful stubble detection ensures the stability and reliability of the visual navigation system.
Methods: Firstly, three types of texture features (GLCM, GLRLM and LBP) are extracted from three types of images: stubbles, residual films and broken leaves between rows. Three classifiers (Random Forest, Back Propagation Neural Network and Support Vector Machine) are then built to classify the sample images. Finally, the possibility of improving the classification accuracy using texture features extracted from the wavelet decomposition coefficients is discussed.
Results: The experiments show that the GLCM texture features of the original image perform best under the Back Propagation Neural Network classifier. Among the different wavelet bases, the vertical coefficient texture features of the coif3 wavelet decomposition, combined with the texture features of the original image, give the best classification effect. Compared with the original image texture features alone, the classification accuracy is increased by 3.8%, the sensitivity by 4.8%, and the specificity by 1.2%.
Conclusions: The algorithm can complete the task of stubble detection in different locations, different periods and abnormal driving conditions, which shows that the wavelet coefficient texture features combined with the original image texture features form a useful fusion feature for detecting stubble and can provide a reference for stubble detection in other crops.
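The fusion of wavelet-coefficient textures with original-image textures can be sketched as follows using PyWavelets and scikit-image; the normalization, GLCM parameters, and random placeholder image are illustrative assumptions, not the authors' implementation.

```python
# Sketch: GLCM features from the coif3 vertical detail coefficients,
# concatenated with GLCM features of the original image (feature-fusion
# idea only; parameters and scaling are illustrative).
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def to_uint8(a):
    a = a - a.min()
    return (255 * a / (a.max() + 1e-9)).astype(np.uint8)

def glcm_vector(img_u8):
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "correlation", "energy", "homogeneity")
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

gray = np.random.rand(128, 128)                   # placeholder field image (grayscale)
_, (cH, cV, cD) = pywt.dwt2(gray, "coif3")        # cV: vertical detail coefficients
fused = np.concatenate([glcm_vector(to_uint8(gray)), glcm_vector(to_uint8(cV))])
print(fused.shape)                                # fused feature vector for the classifier
```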


2020 ◽  
Vol 10 (4) ◽  
pp. 5986-5991
Author(s):  
A. N. Saeed

Artificial Intelligence (AI) based Machine Learning (ML) is gaining more attention from researchers. In ophthalmology, ML has been applied to fundus photographs, achieving robust classification performance in the detection of diseases such as diabetic retinopathy and retinopathy of prematurity. The detection and extraction of blood vessels in the retina is an essential part of diagnosing various eye conditions, such as diabetic retinopathy. This paper proposes a novel machine learning approach to segment the retinal blood vessels in eye fundus images using a combination of color features, texture features, and Back Propagation Neural Networks (BPNN). The proposed method comprises two steps: color and texture feature extraction, and training the BPNN to obtain the segmented retinal vessels. The magenta color and correlation texture features are given as input to the BPNN. The system was trained and tested on retinal fundus images taken from two distinct databases. The average sensitivity, specificity, and accuracy obtained for the segmentation of retinal blood vessels were 0.470, 0.914, and 0.903, respectively. The results reveal that the proposed methodology performs well in the automated segmentation of retinal vessels and achieves accuracy comparable to other methods.
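The classification step can be sketched with scikit-learn's MLPClassifier standing in for a BPNN; the two synthetic per-pixel features below merely stand in for the magenta color and correlation texture features, so this illustrates the training and evaluation flow under stated assumptions rather than the paper's method.

```python
# Sketch: training a back-propagation network (sklearn's MLPClassifier as a
# stand-in) on simple per-pixel color/texture features for vessel vs.
# background; features and labels here are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 5000
# Placeholder features: e.g. a magenta-like component and a local texture score.
X = rng.random((n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(n) > 0.8).astype(int)

bpnn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
bpnn.fit(X, y)

pred = bpnn.predict(X)
tp = np.sum((pred == 1) & (y == 1)); tn = np.sum((pred == 0) & (y == 0))
fp = np.sum((pred == 1) & (y == 0)); fn = np.sum((pred == 0) & (y == 1))
print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))
```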


2021 ◽  
Vol 13 (2) ◽  
pp. 103-114
Author(s):  
Yongzhen Ke ◽  
Yiping Cui

Tampering with images may be involved in crime and can also mislead the public with incorrect information. Local deformation is one of the most common image tampering methods; it changes the original texture features and the correlation between the pixels of an image. A multiple-fusion strategy based on first-order difference images and their texture features is proposed to locate the tampered regions in a locally deformed image. Firstly, texture features are extracted from overlapping blocks of one color channel and fed into the fuzzy c-means clustering method to generate a tamper probability map (TPM); several TPMs obtained with different block sizes are then fused in the first fusion step. Secondly, TPMs obtained from different color channels and from different texture features are fused in the second and third fusion steps, respectively. The experimental results show that the proposed method can accurately detect the location of the local deformation in an image.
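A hedged sketch of how block texture features and fuzzy c-means could yield a tamper probability map is shown below; the block-variance feature, the minimal FCM implementation, and the choice of which cluster is treated as "tampered" are all illustrative assumptions, not the paper's exact design.

```python
# Sketch: block variance as a simple texture feature, clustered with a
# minimal fuzzy c-means (two clusters) to form a tamper probability map.
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=50):
    """Minimal FCM on 1-D features; returns (memberships (c, n), centers (c,))."""
    rng = np.random.default_rng(0)
    u = rng.random((c, x.size)); u /= u.sum(axis=0)
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=0))
    return u, centers

def tamper_probability_map(diff_channel, block=8):
    h, w = diff_channel.shape
    hs, ws = h - h % block, w - w % block
    blocks = diff_channel[:hs, :ws].reshape(hs // block, block, ws // block, block)
    feat = blocks.var(axis=(1, 3)).ravel()        # one texture value per block
    u, centers = fuzzy_cmeans(feat)
    # Illustrative choice: treat the higher-variance cluster as "possibly tampered".
    return u[np.argmax(centers)].reshape(hs // block, ws // block)

diff = np.abs(np.diff(np.random.rand(128, 129), axis=1))  # placeholder first-order difference image
print(tamper_probability_map(diff).shape)
```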


2019 ◽  
Vol 2019 (1) ◽  
pp. 86-90
Author(s):  
Hakki Can Karaimer ◽  
Michael S. Brown

Most modern cameras allow captured images to be saved in two color spaces: (1) raw-RGB and (2) standard RGB (sRGB). The raw-RGB image represents a scene-referred sensor image whose RGB values are specific to the color sensitivities of the sensor's color filter array. The sRGB image represents a display-referred image that has been rendered through the camera's image signal processor (ISP). The rendering process involves several camera-specific photo-finishing manipulations intended to make the sRGB image visually pleasing. For applications that want to use a camera for purposes beyond photography, both the raw-RGB and sRGB color spaces are undesirable. For example, because the raw-RGB color space is dependent on the camera's sensor, it is challenging to develop applications that work across multiple cameras. Similarly, the camera-specific photo-finishing operations used to render sRGB images also hinder applications intended to run on different cameras. Interestingly, the ISP camera pipeline includes a colorimetric conversion stage where the raw-RGB images are converted to a device-independent color space. However, this image state is not accessible. In this paper, we advocate for the ability to access the colorimetric image state and recommend that cameras output a third image format that is based on this device-independent colorimetric space. To this end, we perform experiments to demonstrate that image pixel values in a colorimetric space are more similar across different makes and models than sRGB and raw-RGB.
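To make the notion of a device-independent colorimetric space concrete, the sketch below applies the standard sRGB linearization and D65 matrix to map sRGB values into CIE XYZ; it illustrates the kind of colorimetric state discussed, not the camera ISP's actual conversion stage, and the input image is a placeholder.

```python
# Sketch: converting a display-referred sRGB image to the device-independent
# CIE XYZ space (standard sRGB/D65 definition).
import numpy as np

def srgb_to_xyz(srgb):
    """srgb: float array in [0, 1], shape (..., 3). Returns CIE XYZ (D65)."""
    # Undo the sRGB gamma (linearization).
    linear = np.where(srgb <= 0.04045, srgb / 12.92, ((srgb + 0.055) / 1.055) ** 2.4)
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    return linear @ m.T

pixels = np.random.rand(4, 4, 3)      # placeholder sRGB values
print(srgb_to_xyz(pixels)[0, 0])
```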


2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Minghu Wu ◽  
Rui Chen ◽  
Ying Tong

Shadow detection and removal in real-scene images is a significant problem for target detection. This work proposes an improved shadow detection and removal algorithm for urban video surveillance. First, the foreground is detected by background subtraction and shadows are detected in the HSV color space. Using the local variance and Otsu's method, we obtain the moving targets with their texture features. According to the characteristics of shadows in HSV space and the texture features, shadows are detected and removed to eliminate shadow interference in the subsequent processing of moving targets. Finally, we embed our algorithm into a client/server (C/S) framework based on the HTML5 WebSocket protocol. Both the experimental and actual operation results show that the proposed algorithm is efficient and robust for target detection and for shadow detection and removal under different scenes.
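A minimal sketch of the HSV shadow test is given below, following the classic criterion that a shadowed foreground pixel darkens the value channel within a ratio band while changing hue and saturation little; all thresholds are illustrative, and the paper's local-variance, Otsu, and texture steps are omitted.

```python
# Sketch: a classic HSV shadow test for foreground pixels (thresholds are
# illustrative; the paper additionally uses local variance, Otsu and texture).
import numpy as np
from skimage.color import rgb2hsv

def shadow_mask(frame_rgb, background_rgb, foreground,
                alpha=0.4, beta=0.95, tau_s=0.15, tau_h=0.1):
    f, b = rgb2hsv(frame_rgb), rgb2hsv(background_rgb)
    v_ratio = f[..., 2] / (b[..., 2] + 1e-6)
    s_diff = np.abs(f[..., 1] - b[..., 1])
    h_diff = np.abs(f[..., 0] - b[..., 0])
    is_shadow = (alpha <= v_ratio) & (v_ratio <= beta) & (s_diff <= tau_s) & (h_diff <= tau_h)
    return foreground & is_shadow          # shadow pixels to remove from the foreground

frame = np.random.rand(120, 160, 3)        # placeholder current frame
bg = np.random.rand(120, 160, 3)           # placeholder background model
fg = np.random.rand(120, 160) > 0.7        # placeholder foreground mask
print(shadow_mask(frame, bg, fg).sum(), "shadow pixels")
```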


2013 ◽  
Vol 339 ◽  
pp. 265-268
Author(s):  
Ming Jing Lu ◽  
Zhan Ping Li

Segmentation of motion in an image sequence is one of the most challenging problems in image processing, while at the same time one that finds numerous applications. In this paper, we propose a robust multi-layer background subtraction technique which takes advantage of local texture features represented by local binary patterns (LBP) and photometric invariant color measurements in the RGB color space. Thanks to a simple layer-based strategy, the approach can model moving background pixels with quasiperiodic flickering as well as background scenes that vary over time due to the addition and removal of long-time stationary objects. The use of a cross-bilateral filter implicitly smooths the detection results over regions of similar intensity while preserving object boundaries.
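As a hedged illustration of the cross-bilateral smoothing step, the sketch below filters a soft foreground map using the color frame as the guide image via OpenCV's ximgproc joint bilateral filter (requires opencv-contrib-python); the filter parameters, the 0.5 threshold, and the random inputs are assumptions.

```python
# Sketch: smoothing a soft foreground map with a cross- (joint) bilateral
# filter guided by the color frame, so detections follow object boundaries.
# Requires opencv-contrib-python for cv2.ximgproc; parameters are illustrative.
import numpy as np
import cv2

frame = (np.random.rand(120, 160, 3) * 255).astype(np.float32)   # placeholder color frame (guide)
fg_prob = np.random.rand(120, 160).astype(np.float32)            # placeholder soft foreground map

smoothed = cv2.ximgproc.jointBilateralFilter(frame, fg_prob, d=9,
                                             sigmaColor=25, sigmaSpace=9)
mask = smoothed > 0.5                                             # final foreground mask
print(mask.sum())
```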

