Moving Object and Shadow Detection Based on RGB Color Space and Edge Ratio

Author(s):  
Xia Dong ◽  
Kedian Wang ◽  
Guohua Jia


2013 ◽
Vol 703 ◽  
pp. 304-307
Author(s):  
Bao Dong Yan ◽  
Ying Yu

The aim of human mechanics is to reveal the mechanical properties of human motion. In particular, the purpose of human motion detection is to detect moving people from continuous image sequences, extract human body segments, and then obtain motion features. This paper presents a shadow detection algorithm based on a covariance-difference operator in RGB color space and discusses its mechanical properties. The presented algorithm comprises four steps: object detection, suspected-shadow detection, shadow detection, and post-processing. An adaptive shadow-detection threshold is adopted to suppress the effect of shadows in moving object detection more effectively. Experimental results show that the presented algorithm can detect shadows effectively.
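The suspected-shadow step can be sketched in a few lines. This is a minimal numpy illustration assuming a simple per-channel darkening-ratio test; the abstract does not give the covariance-difference operator or the adaptive threshold formula, so `lo`, `hi`, and the uniformity tolerance are hypothetical values:

```python
import numpy as np

def suspected_shadow_mask(frame, background, lo=0.4, hi=0.95):
    """Flag pixels whose RGB values are a roughly uniform darkening of the
    background -- the classic cue for a cast shadow. `lo`/`hi` bound the
    per-channel darkening ratio (illustrative values, not the paper's)."""
    frame = frame.astype(np.float64)
    background = background.astype(np.float64)
    ratio = frame / np.maximum(background, 1.0)   # per-channel darkening
    in_band = (ratio >= lo) & (ratio <= hi)       # darkened, but not black
    uniform = ratio.std(axis=-1) < 0.05           # same factor in R, G and B
    return in_band.all(axis=-1) & uniform

# toy frame: background is mid-gray, one region is uniformly darkened (shadow),
# another is overwritten by a green "object"
bg = np.full((4, 4, 3), 200.0)
fr = bg.copy()
fr[:2, :2] *= 0.6                  # uniform darkening -> shadow
fr[2:, 2:] = [30, 200, 30]         # real object, not shadow
mask = suspected_shadow_mask(fr, bg)
print(mask[:2, :2].all(), mask[2:, 2:].any())  # True False
```

The uniformity test is what separates a shadow (all channels attenuated by the same factor) from an object that merely happens to be darker than the background.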


2014 ◽  
Vol 596 ◽  
pp. 374-378
Author(s):  
Qi Lin Gai ◽  
Guo Qiang Wang

In intelligent video surveillance and multimedia applications, we usually need to detect a moving object separated from the background. The quality of moving object detection affects subsequent identification, classification, and tracking. Shadow detection and suppression are likewise important techniques in intelligent video surveillance: because a moving object and its shadow usually share the same behavioral characteristics, shadows lead to errors in object recognition and tracking and seriously degrade the robustness of the system. This article studies the principle and algorithm of background subtraction in detail, and applies shadow detection and suppression algorithms based on the YUV color space. Experimental results show that the algorithms in this paper achieve better accuracy and stability in moving object detection.
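The YUV-based suppression idea can be illustrated as follows: a shadow attenuates luminance (Y) while leaving chrominance (U, V) close to the background. This minimal numpy sketch uses the standard BT.601 RGB-to-YUV conversion; the thresholds `y_lo`, `y_hi`, and `uv_tol` are assumptions, not the paper's values:

```python
import numpy as np

def rgb_to_yuv(img):
    """BT.601 RGB -> YUV conversion (a standard formula)."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return img.astype(np.float64) @ m.T

def shadow_mask_yuv(frame, background, y_lo=0.4, y_hi=0.95, uv_tol=10.0):
    """A pixel is shadow if luminance is attenuated while chrominance stays
    close to the background. Threshold values are illustrative."""
    f, b = rgb_to_yuv(frame), rgb_to_yuv(background)
    y_ratio = f[..., 0] / np.maximum(b[..., 0], 1.0)
    y_ok = (y_ratio >= y_lo) & (y_ratio <= y_hi)
    uv_ok = np.abs(f[..., 1:] - b[..., 1:]).max(axis=-1) < uv_tol
    return y_ok & uv_ok

bg = np.full((2, 2, 3), [100, 150, 120], dtype=float)
fr = bg.copy()
fr[0, 0] *= 0.7               # cast shadow: uniform attenuation
fr[1, 1] = [30, 200, 30]      # green object replacing the background
m = shadow_mask_yuv(fr, bg)
print(bool(m[0, 0]), bool(m[1, 1]))  # True False
```

Separating luminance from chrominance is exactly why YUV (rather than RGB) is convenient here: the two cues land in different channels.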


Agriculture ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 6
Author(s):  
Ewa Ropelewska

The aim of this study was to evaluate the usefulness of the texture and geometric parameters of the endocarp (pit) for distinguishing different cultivars of sweet cherries using image analysis. The textures from images converted to color channels and the geometric parameters of the endocarp (pits) of the sweet cherries ‘Kordia’, ‘Lapins’, and ‘Büttner’s Red’ were calculated. For the set combining the selected textures from all color channels, the accuracy reached 100% when comparing ‘Kordia’ vs. ‘Lapins’ and ‘Kordia’ vs. ‘Büttner’s Red’ for all classifiers. The pits of ‘Kordia’ and ‘Lapins’, as well as ‘Kordia’ and ‘Büttner’s Red’, were also 100% correctly discriminated by discriminative models built separately for the RGB, Lab, and XYZ color spaces, the G, L, and Y color channels, and models combining selected textural and geometric features. For discriminating the ‘Lapins’ and ‘Büttner’s Red’ pits, slightly lower accuracies were determined: up to 93% for models built on textures selected from all color channels, 91% for the RGB color space, 92% for the Lab and XYZ color spaces, 84% for the G and L color channels, 83% for the Y channel, 94% for geometric features, and 96% for combined textural and geometric features.
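The pipeline (per-channel texture features plus geometric parameters of the pit mask, fed to a discriminative model) can be sketched schematically. The features and the nearest-centroid classifier below are simplified stand-ins for the study's texture parameters and classifiers; all names and values are hypothetical:

```python
import numpy as np

def pit_features(mask, channel):
    """Toy stand-in for the study's features: per-channel intensity mean and
    spread (texture proxies) plus area and aspect ratio (geometric)."""
    ys, xs = np.nonzero(mask)
    h = np.ptp(ys) + 1
    w = np.ptp(xs) + 1
    vals = channel[mask]
    return np.array([vals.mean(), vals.std(), mask.sum(), h / w])

def nearest_centroid(train_X, train_y, x):
    """Minimal discriminative model: assign the class whose mean feature
    vector is closest (the paper used richer classifiers)."""
    classes = sorted(set(train_y))
    cents = [train_X[[yy == c for yy in train_y]].mean(axis=0) for c in classes]
    return classes[int(np.argmin([np.linalg.norm(x - c) for c in cents]))]

# two synthetic "cultivars": bright round pits vs. darker elongated pits
rng = np.random.default_rng(0)
def sample(bright, tall):
    mask = np.zeros((10, 10), bool)
    mask[2:2 + tall, 3:7] = True
    chan = rng.normal(200 if bright else 120, 5, (10, 10))
    return pit_features(mask, chan)

X = np.array([sample(True, 4) for _ in range(5)] + [sample(False, 7) for _ in range(5)])
y = ["Kordia"] * 5 + ["Lapins"] * 5
print(nearest_centroid(X, y, sample(True, 4)))  # Kordia
```

The cultivar labels here only mirror the abstract's vocabulary; the synthetic data obviously stands in for real pit images.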


2021 ◽  
Vol 13 (4) ◽  
pp. 699
Author(s):  
Tingting Zhou ◽  
Haoyang Fu ◽  
Chenglin Sun ◽  
Shenghan Wang

Due to occlusion by high-rise objects and the influence of the sun’s altitude and azimuth, shadows are inevitably formed in remote sensing images, particularly in urban areas, which causes missing information in the shadow region. In this paper, we propose a new method for shadow detection and compensation using an object-based strategy. For shadow detection, the shadow is highlighted by an improved shadow index (ISI) that combines color space with an NIR band; ISI is then reconstructed from the objects acquired by the mean-shift algorithm to weaken noise interference and improve integrity. Finally, threshold segmentation is applied to obtain the shadow mask. For shadow compensation, the objects from segmentation are treated as the minimum processing unit. Adjacent objects are likely to share the same ambient light intensity, based on which we put forward a shadow compensation method that always compensates shadow objects using their adjacent non-shadow objects. Furthermore, we present a dynamic penumbra compensation method (DPCM) to define the penumbra scope and accurately remove the penumbra. Finally, the proposed methods were compared with state-of-the-art shadow indexes, shadow compensation methods, and penumbra compensation methods. The experiments show that the proposed method can accurately detect shadow in urban high-resolution remote sensing images with a complex background and can effectively compensate the information in the shadow region.
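The detect-then-threshold stage can be illustrated as follows. Since the abstract does not give the ISI formula, the index below is a generic NIR-assisted stand-in (shadows are dark in both visible brightness and the NIR band), and the threshold is an assumption:

```python
import numpy as np

def shadow_index(rgb, nir):
    """Illustrative NIR-assisted shadow index (NOT the paper's ISI):
    high where a pixel is dark in both visible brightness and NIR."""
    rgb = rgb.astype(np.float64) / 255.0
    nir = nir.astype(np.float64) / 255.0
    brightness = rgb.mean(axis=-1)
    return (1.0 - brightness) * (1.0 - nir)

def shadow_mask(rgb, nir, thresh=0.5):
    """Threshold segmentation of the index into a binary shadow mask."""
    return shadow_index(rgb, nir) > thresh

# one bright pixel (high NIR) and one dark pixel (low NIR -> shadow)
rgb = np.array([[[200, 200, 200], [30, 30, 30]]], dtype=np.uint8)
nir = np.array([[220, 25]], dtype=np.uint8)
m = shadow_mask(rgb, nir)
print(bool(m[0, 0]), bool(m[0, 1]))  # False True
```

In the paper's pipeline this per-pixel index would additionally be averaged over mean-shift objects before thresholding, which is what suppresses the salt-and-pepper noise a pixelwise test produces.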


2021 ◽  
Vol 13 (6) ◽  
pp. 1211
Author(s):  
Pan Fan ◽  
Guodong Lang ◽  
Bin Yan ◽  
Xiaoyan Lei ◽  
Pengju Guo ◽  
...  

In recent years, many agriculture-related problems have been addressed by integrating artificial intelligence techniques with remote sensing systems. The rapid and accurate identification of apple targets in an illuminated, unstructured natural orchard is still a key challenge for a picking robot’s vision system. In this paper, by combining local image features and color information, we propose a pixel patch segmentation method based on a gray-centered red–green–blue (RGB) color space to address this issue. Unlike existing methods, it introduces a novel color feature selection scheme that accounts for the influence of illumination and shadow in apple images. By exploiting both color features and local variation in apple images, the proposed method can effectively distinguish apple fruit pixels from other pixels. Compared with classical segmentation methods and conventional clustering algorithms, as well as popular deep-learning segmentation algorithms, the proposed method segments apple images more accurately and effectively. The proposed method was tested on 180 apple images. It offered an average accuracy rate of 99.26%, a recall rate of 98.69%, a false positive rate of 0.06%, and a false negative rate of 1.44%. Experimental results demonstrate the outstanding performance of the proposed method.
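The gray-centering idea, shifting the color space so the gray axis (R = G = B) maps to the origin, can be sketched as follows; illumination changes slide pixels along the gray axis, so they largely cancel after centering. The per-pixel red test and the `red_margin` threshold are illustrative assumptions, not the paper's full patch-based method:

```python
import numpy as np

def gray_centered(rgb):
    """Subtract the per-pixel channel mean: the gray axis collapses to the
    origin, leaving only the chromatic residual of each pixel."""
    rgb = rgb.astype(np.float64)
    return rgb - rgb.mean(axis=-1, keepdims=True)

def apple_pixels(rgb, red_margin=20.0):
    """Toy pixel test: in gray-centered coordinates a red apple pixel keeps a
    strongly positive R component regardless of brightness."""
    return gray_centered(rgb)[..., 0] > red_margin

img = np.array([[[180, 40, 40],      # sunlit apple
                 [90, 20, 20],       # shadowed apple: same hue, darker
                 [120, 120, 120]]])  # gray background
print(apple_pixels(img)[0].tolist())  # [True, True, False]
```

Note that the sunlit and shadowed apple pixels are both accepted even though their raw intensities differ by a factor of two, which is the robustness to illumination the abstract claims.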


2021 ◽  
Vol 13 (5) ◽  
pp. 939
Author(s):  
Yongan Xue ◽  
Jinling Zhao ◽  
Mingmei Zhang

To accurately extract cultivated land boundaries from high-resolution remote sensing imagery, an improved watershed segmentation algorithm was proposed herein based on a combination of pre- and post-improvement procedures. Image contrast enhancement was used as the pre-improvement, while the color distance of the Commission Internationale de l’Eclairage (CIE) color spaces, including Lab and Luv, was used as the regional similarity measure for region merging as the post-improvement. Furthermore, the area relative error criterion (δA), the pixel quantity error criterion (δP), and the consistency criterion (Khat) were used to evaluate the image segmentation accuracy. Region merging in the Red–Green–Blue (RGB) color space was selected as the comparison for the proposed algorithm in extracting cultivated land boundaries. The validation experiments were performed on a subset of a Chinese Gaofen-2 (GF-2) remote sensing image with a coverage area of 0.12 km2. The results showed the following: (1) The contrast-enhanced image exhibited an obvious gain in segmentation effect and time efficiency using the improved algorithm; time efficiency increased by 10.31%, 60.00%, and 40.28% in the RGB, Lab, and Luv color spaces, respectively. (2) The optimal segmentation and merging scale parameters in the RGB, Lab, and Luv color spaces were a minimum area C of 2000, 1900, and 2000, and a color difference D of 1000, 40, and 40, respectively. (3) The algorithm improved the time efficiency of cultivated land boundary extraction in the Lab and Luv color spaces by 35.16% and 29.58%, respectively, compared to the RGB color space. Compared to the RGB color space, the extraction accuracy in terms of δA, δP, and Khat improved by 76.92%, 62.01%, and 16.83% in the Lab color space, and by 55.79%, 49.67%, and 13.42% in the Luv color space.
(4) In terms of visual comparison, time efficiency, and segmentation accuracy, the comprehensive extraction effect of the proposed algorithm was obviously better than that of the RGB color space-based algorithm. The established accuracy evaluation indicators were also shown to be consistent with the visual evaluation. (5) The proposed method showed satisfactory transferability on a wider test area with a coverage area of 1 km2. In summary, the proposed method, based on image contrast enhancement, performs region merging in the CIE color spaces on the simulated immersion watershed segmentation results. It is a useful attempt to adapt the watershed segmentation algorithm to extracting cultivated land boundaries, and it provides a reference for enhancing the watershed algorithm.
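The Lab-distance region merging used as the post-improvement can be sketched as a union-find merge over adjacent watershed regions, with the CIE76 color difference (Euclidean distance in Lab) as the similarity measure. The merging scale D = 40 follows the abstract's Lab setting; the region data and adjacency are hypothetical:

```python
import numpy as np

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in Lab space."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

def merge_regions(mean_lab, adjacency, d_max=40.0):
    """Greedy merge sketch: union adjacent regions whose mean Lab colors
    differ by less than the merging scale `d_max`."""
    parent = list(range(len(mean_lab)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i, j in adjacency:
        if delta_e76(mean_lab[i], mean_lab[j]) < d_max:
            parent[find(i)] = find(j)
    return [find(i) for i in range(len(mean_lab))]

# three regions: 0 and 1 are similar field parcels, 2 is a much brighter road
labels = merge_regions([(52, 10, 30), (55, 12, 28), (90, 0, 0)],
                       adjacency=[(0, 1), (1, 2)])
print(labels[0] == labels[1], labels[1] == labels[2])  # True False
```

Using Lab rather than RGB distance is the point of the post-improvement: Euclidean distance in Lab tracks perceived color difference far better than in RGB, which is consistent with the much smaller merging scale (40 vs. 1000) the abstract reports.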


Author(s):  
HUA YANG ◽  
MASAAKI KASHIMURA ◽  
NORIKADU ONDA ◽  
SHINJI OZAWA

This paper describes a new system for extracting and classifying bibliography regions from the color image of a book cover. The system consists of three major components: preprocessing, color space segmentation, and text region extraction and classification. Preprocessing extracts the edge lines of the book, geometrically corrects the input image, and segments it into the front cover, spine, and back cover. As in all color image processing research, the segmentation of the color space is an essential and important step here. Instead of the RGB color space, the HSI color space is used in this system. The color space is first segmented into achromatic and chromatic regions; both are then segmented further to complete the color space segmentation. Text region extraction and classification follow. After detecting fundamental features (stroke width and local label width), text regions are determined. By comparing the text regions on the front cover with those on the spine, all extracted text regions are classified into suitable bibliography categories: author, title, publisher, and other information, without applying OCR.
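The first stage of the color space segmentation, splitting pixels into achromatic and chromatic regions, can be sketched with the standard HSI saturation and intensity formulas; the threshold values below are illustrative, not the paper's:

```python
import numpy as np

def rgb_to_si(rgb):
    """Saturation and intensity of the HSI model (standard formulas):
    I = mean(R, G, B), S = 1 - min(R, G, B) / I."""
    rgb = rgb.astype(np.float64) / 255.0
    i = rgb.mean(axis=-1)
    s = 1.0 - rgb.min(axis=-1) / np.maximum(i, 1e-9)
    return s, i

def achromatic_mask(rgb, s_th=0.1, i_lo=0.1, i_hi=0.9):
    """A pixel is achromatic when its saturation is low or its intensity is
    extreme (near black or near white). Thresholds are assumptions."""
    s, i = rgb_to_si(rgb)
    return (s < s_th) | (i < i_lo) | (i > i_hi)

img = np.array([[[128, 128, 128],    # mid gray  -> achromatic
                 [200, 30, 30],      # red       -> chromatic
                 [250, 250, 250]]])  # near white -> achromatic
print(achromatic_mask(img)[0].tolist())  # [True, False, True]
```

Hue is unstable for low-saturation and extreme-intensity pixels, which is why the system separates the achromatic region before any hue-based subdivision of the chromatic one.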

