A Watershed-Segmentation-Based Improved Algorithm for Extracting Cultivated Land Boundaries

2021 ◽  
Vol 13 (5) ◽  
pp. 939
Author(s):  
Yongan Xue ◽  
Jinling Zhao ◽  
Mingmei Zhang

To accurately extract cultivated land boundaries from high-resolution remote sensing imagery, an improved watershed segmentation algorithm is proposed that combines pre- and post-segmentation improvements. Image contrast enhancement serves as the pre-improvement, while the color distance in the Commission Internationale de l'Eclairage (CIE) color spaces Lab and Luv serves as the regional similarity measure for region merging as the post-improvement. The area relative error criterion (δA), the pixel quantity error criterion (δP), and the consistency criterion (Khat) were used to evaluate segmentation accuracy. Region merging in the Red–Green–Blue (RGB) color space was used as the baseline against which the proposed algorithm was compared when extracting cultivated land boundaries. Validation experiments were performed on a subset of a Chinese Gaofen-2 (GF-2) remote sensing image covering 0.12 km2. The results showed the following: (1) Contrast enhancement produced an obvious gain in both segmentation quality and time efficiency for the improved algorithm; time efficiency increased by 10.31%, 60.00%, and 40.28% in the RGB, Lab, and Luv color spaces, respectively. (2) The optimal segmentation and merging scale parameters in the RGB, Lab, and Luv color spaces were a minimum area C of 2000, 1900, and 2000 and a color difference D of 1000, 40, and 40, respectively. (3) Compared with the RGB color space, the algorithm improved the time efficiency of cultivated land boundary extraction by 35.16% in Lab and 29.58% in Luv. Measured by δA, δP, and Khat, the extraction accuracy improved by 76.92%, 62.01%, and 16.83% in the Lab color space and by 55.79%, 49.67%, and 13.42% in the Luv color space, respectively, relative to the RGB color space. (4) In terms of visual comparison, time efficiency, and segmentation accuracy, the overall extraction performance of the proposed algorithm was clearly better than that of the RGB color-space-based algorithm, and the established accuracy indicators were consistent with the visual evaluation. (5) The proposed method showed satisfactory transferability on a wider test area covering 1 km2. In summary, the proposed method applies image contrast enhancement and then performs region merging in the CIE color spaces on the simulated-immersion watershed segmentation results. It is a useful attempt at extracting cultivated land boundaries with the watershed segmentation algorithm and provides a reference for further enhancing the watershed algorithm.
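The abstract outlines a pipeline of contrast enhancement, simulated-immersion watershed segmentation, and region merging by CIE color distance. The following is a minimal sketch of that pipeline, assuming a recent scikit-image version in which the region-adjacency-graph utilities live in skimage.graph (older releases expose them under skimage.future.graph). The CLAHE enhancement, marker count, and color threshold are illustrative stand-ins, and the paper's minimum-area parameter C is not reproduced.

```python
from skimage import exposure, color, filters, segmentation, graph

def segment_and_merge(rgb, markers=400, color_thresh=15.0):
    """Contrast enhancement, watershed on a gradient image, then Lab region merging."""
    lab = color.rgb2lab(rgb / 255.0)

    # Pre-improvement: contrast-enhance the lightness channel (CLAHE as a stand-in).
    l_enhanced = exposure.equalize_adapthist(lab[..., 0] / 100.0) * 100.0

    # Simulated-immersion watershed on the gradient of the enhanced lightness.
    gradient = filters.sobel(l_enhanced)
    labels = segmentation.watershed(gradient, markers=markers, compactness=0.001)

    # Post-improvement: merge adjacent regions whose mean CIE Lab colors are close.
    rag = graph.rag_mean_color(lab, labels, mode='distance')
    return graph.cut_threshold(labels, rag, color_thresh)
```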

2020 ◽  
Vol 9 (4) ◽  
pp. 246
Author(s):  
Mingmei Zhang ◽  
Yongan Xue ◽  
Yonghui Ge ◽  
Jinling Zhao

To accurately identify slope hazards from high-resolution remote sensing imagery, an improved watershed segmentation algorithm is proposed. The color difference in the Luv color space is used as the regional similarity measure for region merging. Furthermore, the area relative error for evaluating image segmentation accuracy was improved and supplemented with the pixel quantity error. The algorithm was validated by a multiscale segmentation extraction experiment that identified an unstable slope in Chinese Gaofen-2 (GF-2) remote sensing imagery. The results show the following: (1) The optimal segmentation and merging scale parameters were a minimum threshold constant C (minimum area Amin) of 500 and an optimal color-difference threshold D of 400. (2) The total processing time for segmenting and merging the unstable slope was 39.702 s, much lower than that of the maximum likelihood classification method and only slightly higher than that of the object-oriented classification method. The relative error of the slope hazard area was 4.92% and the pixel quantity error was 1.60%, both superior to the two classification methods. (3) The evaluation criteria of segmentation accuracy were consistent with the results of visual interpretation and the confusion matrix, indicating that the criteria established in this study are reliable. Comparing time efficiency, visual effect, and classification accuracy, the proposed method achieves a good overall extraction effect. It can serve as a technical reference for the rapid extraction of slope hazards from remote sensing imagery, and it also provides theoretical and practical experience for improving the watershed segmentation algorithm.
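The abstract names an area relative error and a pixel quantity error as accuracy criteria but does not give their formulas. The sketch below implements one plausible reading (δA as the relative difference of total areas, δP as the share of mismatched pixels) for a pair of binary masks; it is an assumption, not the authors' exact definition.

```python
import numpy as np

def area_relative_error(extracted_mask, reference_mask):
    """delta_A: relative difference between extracted and reference areas."""
    a_ext = extracted_mask.astype(bool).sum()
    a_ref = reference_mask.astype(bool).sum()
    return abs(a_ext - a_ref) / a_ref

def pixel_quantity_error(extracted_mask, reference_mask):
    """delta_P: share of pixels labelled differently in the two masks."""
    return float(np.mean(extracted_mask.astype(bool) != reference_mask.astype(bool)))
```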


Agriculture ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 6
Author(s):  
Ewa Ropelewska

The aim of this study was to evaluate the usefulness of the texture and geometric parameters of the endocarp (pit) for distinguishing sweet cherry cultivars using image analysis. Textures were computed from images converted to individual color channels, and geometric parameters were calculated for the pits of the sweet cherry cultivars ‘Kordia’, ‘Lapins’, and ‘Büttner’s Red’. For the set combining the selected textures from all color channels, the accuracy reached 100% for ‘Kordia’ vs. ‘Lapins’ and ‘Kordia’ vs. ‘Büttner’s Red’ for all classifiers. The pits of ‘Kordia’ and ‘Lapins’, as well as ‘Kordia’ and ‘Büttner’s Red’, were also discriminated with 100% accuracy by models built separately for the RGB, Lab and XYZ color spaces, for the G, L and Y color channels, and for models combining selected textural and geometric features. For discriminating ‘Lapins’ and ‘Büttner’s Red’ pits, slightly lower accuracies were obtained: up to 93% for models built on textures selected from all color channels, 91% for the RGB color space, 92% for the Lab and XYZ color spaces, 84% for the G and L color channels, 83% for the Y channel, 94% for geometric features, and 96% for combined textural and geometric features.
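As a rough illustration of computing texture parameters per color channel before classification (the study's exact texture set and software are not specified in the abstract), the sketch below derives a few GLCM statistics for each RGB channel with scikit-image; the distances, angles, and properties chosen here are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def channel_textures(channel_8bit):
    """A few GLCM statistics for a single uint8 color channel."""
    glcm = graycomatrix(channel_8bit, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ('contrast', 'correlation', 'energy', 'homogeneity')}

def rgb_texture_vector(rgb_8bit):
    """GLCM statistics per R, G, B channel, flattened into one feature mapping."""
    feats = {}
    for idx, name in enumerate('RGB'):
        for prop, value in channel_textures(rgb_8bit[..., idx]).items():
            feats[f'{name}_{prop}'] = value
    return feats
```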


2014 ◽  
Vol 543-547 ◽  
pp. 2484-2487
Author(s):  
Jing Zhang ◽  
Wei Dong ◽  
Jian Xin Wang ◽  
Xu Ning Liu

To address poor image contrast and low visibility, a single-image contrast enhancement method is put forward in this paper. The method is based on the dark-object subtraction technique: the fog-degraded image is translated from the RGB color space to the YIQ color space and the Y component is extracted. The maximum entropy method is then used to obtain a threshold for image segmentation, so that different portions of the image can be restored with different formulas. The processed image is subsequently converted back from the YIQ color space to the RGB color space. Finally, a linear dynamic range adjustment is applied to enhance contrast and brightness. Experiments show that the method can effectively remove the haze effect from an image: the dehazing effect is obvious, the image becomes clear and bright, and the details stand out, which facilitates observation and analysis.
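Two steps of the described pipeline lend themselves to a short sketch: the RGB-to-YIQ conversion and a Kapur-style maximum-entropy threshold on the Y channel. The per-region restoration formulas and the final linear stretch are not reproduced, and the implementation below is an assumption-laden stand-in, not the authors' code.

```python
import numpy as np
from skimage.color import rgb2yiq

def max_entropy_threshold(y, bins=256):
    """Kapur maximum-entropy threshold for a luminance channel scaled to [0, 1]."""
    hist, edges = np.histogram(y, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, bins - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0 = p[:t][p[:t] > 0] / p0
        q1 = p[t:][p[t:] > 0] / p1
        h = -np.sum(q0 * np.log(q0)) - np.sum(q1 * np.log(q1))
        if h > best_h:
            best_h, best_t = h, t
    return edges[best_t]

# Synthetic stand-in; replace with the fog-degraded photograph (float RGB in [0, 1]).
foggy_rgb = np.random.rand(64, 64, 3)
y = rgb2yiq(foggy_rgb)[..., 0]           # work on the luminance component
threshold = max_entropy_threshold(y)     # split the image into two portions
```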


Author(s):  
Sumitra Kisan ◽  
Sarojananda Mishra ◽  
Ajay Chawda ◽  
Sanjay Nayak

This article describes how the fractal dimension (FD) plays a vital role in fractal geometry. It is a measure that characterizes the complexity and irregularity of fractals, denoting the amount of space they fill. There are many procedures for evaluating the dimension of fractal surfaces, such as the box count, differential box count, and improved differential box count methods, which are basically used for grayscale images. The authors' objective in this article is to estimate the fractal dimension of color images using different color models, and they propose a novel method for the estimation in the CMY and HSV color spaces. To obtain the results, they performed test operations on a number of color images in the RGB color space. The authors present their experimental results, discuss the issues that characterize the approach, and conclude the article with an analysis of the calculated FDs for images in different color spaces.
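For context, the differential box-counting estimator mentioned above can be sketched as follows for a single channel; applying it per channel of a CMY or HSV conversion would follow the same pattern. The box sizes and per-channel split are illustrative, not the authors' exact procedure.

```python
import numpy as np

def dbc_fractal_dimension(channel, box_sizes=(2, 4, 8, 16, 32)):
    """Differential box count on a 2-D uint8 channel (side length >= largest box)."""
    m = min(channel.shape)
    img = channel[:m, :m].astype(float)
    log_inv_r, log_n = [], []
    for s in box_sizes:
        h = s * 256.0 / m                  # box height in gray levels
        n_r = 0
        for i in range(0, m - m % s, s):
            for j in range(0, m - m % s, s):
                block = img[i:i + s, j:j + s]
                n_r += int(np.ceil((block.max() + 1) / h)
                           - np.ceil((block.min() + 1) / h)) + 1
        log_inv_r.append(np.log(m / s))
        log_n.append(np.log(n_r))
    # FD is the slope of log(N_r) versus log(1/r).
    return np.polyfit(log_inv_r, log_n, 1)[0]
```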


2019 ◽  
Vol 2019 (1) ◽  
pp. 86-90
Author(s):  
Hakki Can Karaimer ◽  
Michael S. Brown

Most modern cameras allow captured images to be saved in two color spaces: (1) raw-RGB and (2) standard RGB (sRGB). The raw-RGB image represents a scene-referred sensor image whose RGB values are specific to the color sensitivities of the sensor's color filter array. The sRGB image represents a display-referred image that has been rendered through the camera's image signal processor (ISP). The rendering process involves several camera-specific photo-finishing manipulations intended to make the sRGB image visually pleasing. For applications that want to use a camera for purposes beyond photography, both the raw-RGB and sRGB color spaces are undesirable. For example, because the raw-RGB color space is dependent on the camera's sensor, it is challenging to develop applications that work across multiple cameras. Similarly, the camera-specific photo-finishing operations used to render sRGB images also hinder applications intended to run on different cameras. Interestingly, the ISP camera pipeline includes a colorimetric conversion stage where the raw-RGB images are converted to a device-independent color space. However, this image state is not accessible. In this paper, we advocate for the ability to access the colorimetric image state and recommend that cameras output a third image format that is based on this device-independent colorimetric space. To this end, we perform experiments to demonstrate that image pixel values in a colorimetric space are more similar across different makes and models than sRGB and raw-RGB.
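As a minimal illustration of the colorimetric stage discussed here, the sketch below white-balances a linear raw-RGB image and applies a 3x3 camera-to-XYZ matrix. The gains and matrix entries are placeholders, since the real values are camera- and illuminant-specific (for example, taken from raw-file metadata); no actual camera pipeline is reproduced.

```python
import numpy as np

def raw_to_xyz(raw_rgb, wb_gains, cam_to_xyz):
    """White-balance a linear raw-RGB image and map it to CIE XYZ."""
    balanced = raw_rgb * wb_gains          # per-channel white balance
    return balanced @ cam_to_xyz.T         # linear 3x3 colorimetric conversion

# Placeholder values; real gains and matrices depend on the camera and illuminant.
wb_gains = np.array([2.0, 1.0, 1.5])
cam_to_xyz = np.array([[0.41, 0.36, 0.18],
                       [0.21, 0.72, 0.07],
                       [0.02, 0.12, 0.95]])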


Author(s):  
Felicia Anisoara Damian ◽  
Simona Moldovanu ◽  
Luminita Moraru

This study investigates the ability of an artificial neural network to differentiate between malignant and benign skin lesions based on two statistical texture features computed in the RGB (R red, G green, B blue) and YIQ (Y luminance; I and Q chromatic differences) color spaces. The targeted features are skewness (S) and kurtosis (K), extracted from the histograms of each color channel of the two color spaces for the two classes of lesions: nevi and melanomas. The extracted data are used to train Feed-Forward Back Propagation Networks (FFBPNs) with 8, 16, 24, or 32 neurons in the hidden layer. The results indicate that skewness features computed for the red channel of the RGB color space are the best choice for reaching the goal of this study. The reported results show the advantages of monochrome channel representations for skin lesion diagnosis.
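A sketch of the feature-extraction and training setup described above: skewness and kurtosis per RGB and YIQ channel feeding a multilayer perceptron as a stand-in for the FFBPNs. The hidden-layer size shown and the scikit-image YIQ conversion are assumptions.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from skimage.color import rgb2yiq
from sklearn.neural_network import MLPClassifier

def lesion_features(rgb):
    """Skewness and kurtosis of every RGB and YIQ channel (12 values per image)."""
    feats = []
    for img in (rgb, rgb2yiq(rgb)):
        for c in range(3):
            channel = img[..., c].ravel()
            feats.extend([skew(channel), kurtosis(channel)])
    return np.array(feats)

# Stand-in for one FFBPN with 16 hidden neurons; fit on stacked feature vectors.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)
# clf.fit(np.stack([lesion_features(im) for im in images]), labels)
```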


Author(s):  
Ewa Ropelewska ◽  
Krzysztof P. Rutkowski

Peaches belonging to different cultivars can differ in their properties. The aim of this study was to evaluate the usefulness of individual parts of the fruit (skin, flesh, stone and seed) for cultivar discrimination of peaches based on textures determined using image analysis. Discriminant analysis was performed using Bayes net, logistic, SMO, multi-class classifier and random forest classifiers on a set of combined textures selected from all color channels (R, G, B, L, a, b, X, Y, Z) and on textures selected separately from the RGB, Lab and XYZ color spaces. For the sets of textures selected from all color channels, an accuracy of 100% was observed for flesh, stones and seeds with selected classifiers. The sets of textures selected from the RGB color space produced 100% correctness for the flesh and seeds of peaches. For the Lab and XYZ color spaces, slightly lower accuracies were obtained than for the RGB color space, and an accuracy of 100% was noted only for the discrimination of peach seeds. The research proved the usefulness of selected texture parameters of the fruit flesh, stones and seeds for the successful discrimination of peach cultivars with an accuracy of 100%. Distinguishing between cultivars may be important for breeders, consumers and the peach industry to ensure adequate processing conditions and equipment parameters. Cultivar identification of fruit by humans may be subject to large errors, and molecular or chemical methods may require special equipment or be time-consuming. Image analysis can provide an objective, rapid and relatively inexpensive procedure with high accuracy for peach cultivar discrimination.
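The discrimination step can be sketched as a cross-validated comparison of classifiers on a table of selected texture features (rows are fruit parts, columns are textures). The classifiers named in the abstract are replaced here by rough scikit-learn analogues (SVC for SMO, GaussianNB for Bayes net), so this is not the authors' setup.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def compare_classifiers(X, y, folds=10):
    """Mean cross-validated accuracy for several classifiers on a feature table."""
    models = {'random forest': RandomForestClassifier(),
              'logistic': LogisticRegression(max_iter=5000),
              'SMO (SVC stand-in)': SVC(kernel='linear'),
              'Bayes (GaussianNB stand-in)': GaussianNB()}
    return {name: cross_val_score(model, X, y, cv=folds).mean()
            for name, model in models.items()}
```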


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
D. Granados-López ◽  
A. García-Rodríguez ◽  
S. García-Rodríguez ◽  
A. Suárez-García ◽  
M. Díez-Mediavilla ◽  
...  

Digital sky images are studied for defining sky conditions in accordance with the CIE Standard General Sky Guide. Likewise, adequate image-processing methods that highlight key image information are analyzed prior to the application of Artificial Neural Network (ANN) classification algorithms. Twenty-two image-processing methods are reviewed and applied to a broad and unbiased dataset of 1500 sky images recorded in Burgos, Spain, over an extensive experimental campaign. The dataset comprises one hundred images of each CIE standard sky type, previously classified from simultaneous sky scanner data. Color-space, spectral-feature, and texture-filter image-processing methods are applied. While the traditional RGB color space yielded good results (ANN accuracy of 86.6%), other color spaces that may be more appropriate, such as Hue Saturation Value (HSV), increased the global classification accuracy. The use of either the green or the blue monochromatic channel improved sky classification, both for the fifteen CIE standard sky types and for the simpler classification into clear, partial, and overcast conditions. The main conclusion is that specific image-processing methods can improve ANN-algorithm accuracy, depending on the image information required by the classification problem.
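One of the image-processing routes above, conversion to HSV followed by per-channel statistics used as ANN inputs, can be sketched as follows; the choice of statistics is illustrative and not taken from the paper.

```python
import numpy as np
from skimage.color import rgb2hsv

def hsv_features(rgb):
    """Mean and standard deviation of the H, S and V channels of a sky image."""
    hsv = rgb2hsv(rgb)
    return np.array([stat for c in range(3)
                     for stat in (hsv[..., c].mean(), hsv[..., c].std())])
```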


2012 ◽  
Vol 22 ◽  
pp. 21-26 ◽  
Author(s):  
Jonathan Cepeda-Negrete ◽  
Raul E. Sanchez-Yanez

Color constancy is an important process in a number of vision tasks. Most image-capture devices operate in the RGB color space, and image processing is usually performed in this space, although some processes perform better when a perceptual color space is used instead. In this paper, experiments on White Patch Retinex, a commonly used color constancy algorithm, are performed in two color spaces, RGB and CIELAB, for comparison purposes. Experimental results on an image set are analyzed with a no-reference quality metric and the outcomes are discussed. It was found that the White Patch Retinex algorithm performs better in RGB than in CIELAB, but that much better results are obtained when color adjustments are applied in sequence, first in CIELAB and then in RGB.
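A minimal sketch of the White Patch Retinex step in RGB is given below; the sequential variant discussed in the abstract would apply an analogous adjustment in CIELAB first and then this RGB step. The percentile-based white estimate is an assumption for robustness to isolated bright pixels, not necessarily the paper's exact formulation.

```python
import numpy as np

def white_patch_retinex(rgb, percentile=99.5):
    """Scale each channel so its (near-)maximum maps to white; rgb in [0, 1]."""
    img = rgb.astype(float)
    white = np.percentile(img.reshape(-1, 3), percentile, axis=0)
    return np.clip(img / np.maximum(white, 1e-6), 0.0, 1.0)
```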


2019 ◽  
Vol 2019 (1) ◽  
pp. 153-158
Author(s):  
Lindsay MacDonald

We investigated how well a multilayer neural network could implement the mapping between two trichromatic color spaces, specifically from camera R,G,B to tristimulus X,Y,Z. For training the network, a set of 800,000 synthetic reflectance spectra was generated. For testing the network, a set of 8,714 real reflectance spectra was collated from instrumental measurements on textiles, paints and natural materials. Various network architectures were tested, with both linear and sigmoidal activations. Results show that over 85% of all test samples had color errors of less than 1.0 ΔE2000 units, much more accurate than could be achieved by regression.
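The mapping experiment can be sketched as fitting a small MLP from camera R,G,B to X,Y,Z and scoring held-out samples with CIEDE2000. The network size, the logistic activation choice, and the D65 Lab conversion below are assumptions rather than the paper's exact architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from skimage.color import xyz2lab, deltaE_ciede2000

def fit_and_score(rgb_train, xyz_train, rgb_test, xyz_test):
    """Fit an RGB -> XYZ network and return the fraction of test samples under 1.0 dE2000."""
    # XYZ is assumed normalized so that the white point has Y close to 1.
    net = MLPRegressor(hidden_layer_sizes=(64, 64), activation='logistic',
                       max_iter=5000)
    net.fit(rgb_train, xyz_train)
    pred = net.predict(rgb_test)
    de = deltaE_ciede2000(xyz2lab(xyz_test), xyz2lab(pred))
    return float(np.mean(de < 1.0))
```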

