An Improved Mixture-of-Gaussians Background Model with Frame Difference and Blob Tracking in Video Stream

2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Li Yao ◽  
Miaogen Ling

Modeling the background and segmenting moving objects are key techniques for computer vision applications. The Mixture-of-Gaussians (MoG) background model is commonly used for foreground extraction in a video stream. However, when objects enter the scene and stay for a while, foreground extraction fails because the stationary objects gradually merge into the background. In this paper, we adopt a blob tracking method to cope with this situation. To construct the MoG model more quickly in very crowded situations, we add a frame difference step to the foreground extracted from the MoG. Moreover, a new shadow removal method based on the RGB color space is proposed.
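A minimal OpenCV sketch of the general idea described above (not the authors' exact implementation): an MoG foreground mask is combined with a frame-difference mask so that newly arrived or fast-moving objects are captured even while the MoG model is still adapting. The video file name and threshold values are illustrative assumptions.

```python
import cv2

cap = cv2.VideoCapture("video.avi")  # placeholder input
mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    fg_mog = mog.apply(frame)                                        # MoG foreground (255=fg, 127=shadow)
    _, fg_mog = cv2.threshold(fg_mog, 200, 255, cv2.THRESH_BINARY)   # drop the shadow label

    diff = cv2.absdiff(gray, prev_gray)                              # frame difference for fast response
    _, fg_diff = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    fg = cv2.bitwise_or(fg_mog, fg_diff)                             # union of both cues
    prev_gray = gray
```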

2012 ◽  
Vol 468-471 ◽  
pp. 2691-2694
Author(s):  
Zhi Li Qing ◽  
Yue Lin Chen

This paper studies moving object detection and shadow elimination in video surveillance. The background is generated from the video images using a mixture-of-Gaussians background model, and the images are transformed into the HSV color space for processing, which achieves the elimination of shadows. The experimental results show that the approach used in this paper is effective for background generation and shadow removal.
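A hedged sketch of HSV-based shadow suppression of the kind described above (a common formulation, not necessarily this paper's exact rule): a foreground pixel is re-labelled as shadow when its value drops relative to the background while hue and saturation stay close. The thresholds alpha, beta, tau_s, and tau_h are assumptions.

```python
import cv2
import numpy as np

def remove_shadows(frame_bgr, background_bgr, fg_mask,
                   alpha=0.4, beta=0.95, tau_s=40, tau_h=20):
    f = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    b = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    ratio = (f[..., 2] + 1.0) / (b[..., 2] + 1.0)          # value (brightness) ratio
    shadow = ((ratio > alpha) & (ratio < beta) &
              (np.abs(f[..., 1] - b[..., 1]) < tau_s) &    # similar saturation
              (np.abs(f[..., 0] - b[..., 0]) < tau_h))     # similar hue (wraparound ignored in this sketch)
    cleaned = fg_mask.copy()
    cleaned[shadow] = 0                                     # drop shadow pixels from the foreground
    return cleaned
```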


Human detection is a vital and difficult task in computer vision applications such as police investigation, vehicle tracking, and human following, and it is very important in public security management. In such security-related cases, detecting objects in video sequences is essential to understanding the behavior of moving objects, which is normally done with a background subtraction technique. The input data are preprocessed using a modified median filter and the Haar transform. The region of interest is extracted using a background subtraction algorithm, and remaining spikes are removed with a thresholding technique. The proposed architecture is coded in standard VHDL and its performance is evaluated on a Spartan-6 FPGA board. The comparison results show that the proposed architecture outperforms the existing method in both hardware cost and image quality.
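A software analogue of the pipeline outlined above (the paper's version is a VHDL design on a Spartan-6 FPGA, and the Haar-transform stage is omitted here): median filtering, background subtraction, and a final threshold to remove residual spikes. The kernel size and threshold value are illustrative assumptions.

```python
import cv2

def detect_regions(frame_gray, background_gray, thresh=30):
    smoothed = cv2.medianBlur(frame_gray, 3)                         # stand-in for the modified median filter
    diff = cv2.absdiff(smoothed, background_gray)                    # background subtraction
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)    # spike removal by thresholding
    return mask
```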


2005 ◽  
Vol 02 (03) ◽  
pp. 227-239 ◽  
Author(s):  
YOUFU WU ◽  
MO DAI

In this paper, we address the problem of detection and analysis of moving objects in a video stream obtained by a fixed camera. To detect the moving objects, the traditional method is to first create a fixed image that includes all the motionless parts of the scene, known as the background model. The difficulty of this approach lies mainly in two aspects: first, a slowly moving object can leave a visible trace in the background model; second, the variation of illumination over time makes it hard to obtain a reasonable background model. To overcome these difficulties, we propose a multiple background model. Once moving objects have been detected, tracking (matching) the extracted objects across successive images is necessary to analyze their behavior. After the matching of mobile objects, a series of analysis methods is presented. The proposed tracking and analysis methods can deal with partial occlusions, stop-and-go motion, moving directions, and crossings of moving objects in very challenging situations. Experimental and comparison results are reported for different real sequences, which show the better performance of our methods.
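A hedged illustration of frame-to-frame matching of extracted objects (the paper's actual matching criteria are not reproduced): each blob in the new frame is linked to the nearest blob centroid in the previous frame, within a maximum displacement. Blobs are assumed to be (x, y, w, h) bounding boxes.

```python
import numpy as np

def match_blobs(prev_blobs, new_blobs, max_dist=50.0):
    def centroid(b):
        x, y, w, h = b
        return np.array([x + w / 2.0, y + h / 2.0])

    matches = []
    for i, nb in enumerate(new_blobs):
        dists = [np.linalg.norm(centroid(nb) - centroid(pb)) for pb in prev_blobs]
        if dists and min(dists) < max_dist:
            matches.append((int(np.argmin(dists)), i))   # (index in previous frame, index in new frame)
    return matches
```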


Agriculture ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 6
Author(s):  
Ewa Ropelewska

The aim of this study was to evaluate the usefulness of the texture and geometric parameters of the endocarp (pit) for distinguishing different cultivars of sweet cherries using image analysis. The textures from images converted to color channels and the geometric parameters of the endocarps (pits) of the sweet cherry cultivars ‘Kordia’, ‘Lapins’, and ‘Büttner’s Red’ were calculated. For the set combining the selected textures from all color channels, the accuracy reached 100% when comparing ‘Kordia’ vs. ‘Lapins’ and ‘Kordia’ vs. ‘Büttner’s Red’ for all classifiers. The pits of ‘Kordia’ and ‘Lapins’, as well as ‘Kordia’ and ‘Büttner’s Red’, were also 100% correctly discriminated by discriminative models built separately for the RGB, Lab, and XYZ color spaces, the G, L, and Y color channels, and for models combining selected textural and geometric features. For discriminating the pits of ‘Lapins’ and ‘Büttner’s Red’, slightly lower accuracies were obtained—up to 93% for models built on textures selected from all color channels, 91% for the RGB color space, 92% for the Lab and XYZ color spaces, 84% for the G and L color channels, 83% for the Y channel, 94% for geometric features, and 96% for combined textural and geometric features.
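A hedged sketch of per-channel texture extraction of the kind used above (the study's exact feature set and software are not reproduced): grey-level co-occurrence features computed separately on the R, G, and B channels of a pit image. Requires scikit-image 0.19 or later; the distances, angles, and property list are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def channel_textures(pit_rgb):
    """Compute GLCM texture features for each color channel of an 8-bit RGB pit image."""
    feats = {}
    for idx, name in enumerate("RGB"):
        chan = pit_rgb[..., idx]
        glcm = graycomatrix(chan, distances=[1], angles=[0],
                            levels=256, symmetric=True, normed=True)
        for prop in ("contrast", "correlation", "energy", "homogeneity"):
            feats[f"{name}_{prop}"] = float(graycoprops(glcm, prop)[0, 0])
    return feats
```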


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4819
Author(s):  
Yikang Li ◽  
Zhenzhou Wang

Single-shot 3D reconstruction techniques are very important for measuring moving and deforming objects. After many decades of study, a great number of interesting single-shot techniques have been proposed, yet the problem remains open. In this paper, a new approach is proposed to reconstruct deforming and moving objects with a structured-light RGB line pattern. The pattern is coded using parallel red, green, and blue lines with equal intervals to facilitate line segmentation and line indexing. A slope difference distribution (SDD)-based image segmentation method is proposed to segment the lines robustly in the HSV color space, and a method of exclusion is proposed to index the red, green, and blue lines robustly. The indexed lines in different colors are fused to obtain a phase map for 3D depth calculation. The quantitative accuracies of measuring a calibration grid and a ball achieved by the proposed approach are 0.46 and 0.24 mm, respectively, which are significantly lower than those achieved by the compared state-of-the-art single-shot techniques.
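A hedged sketch of separating the red, green, and blue stripes in HSV (the paper's slope-difference-distribution thresholding is not reproduced; the hue ranges below are illustrative assumptions). Note that OpenCV represents hue in [0, 180), so red wraps around.

```python
import cv2

def split_rgb_lines(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    red = cv2.bitwise_or(cv2.inRange(hsv, (0, 80, 80), (10, 255, 255)),
                         cv2.inRange(hsv, (170, 80, 80), (180, 255, 255)))   # red hue wraps around 0
    green = cv2.inRange(hsv, (45, 80, 80), (75, 255, 255))
    blue = cv2.inRange(hsv, (100, 80, 80), (130, 255, 255))
    return red, green, blue
```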


2021 ◽  
Vol 13 (6) ◽  
pp. 1211
Author(s):  
Pan Fan ◽  
Guodong Lang ◽  
Bin Yan ◽  
Xiaoyan Lei ◽  
Pengju Guo ◽  
...  

In recent years, many agriculture-related problems have been addressed by integrating artificial intelligence techniques and remote sensing systems. The rapid and accurate identification of apple targets in an illuminated, unstructured natural orchard is still a key challenge for a picking robot’s vision system. In this paper, by combining local image features and color information, we propose a pixel patch segmentation method based on a gray-centered red–green–blue (RGB) color space to address this issue. Unlike existing methods, this method presents a novel color feature selection approach that accounts for the influence of illumination and shadow in apple images. By exploiting both color features and local variation in apple images, the proposed method can effectively distinguish apple fruit pixels from other pixels. Compared with classical segmentation methods and conventional clustering algorithms, as well as popular deep-learning segmentation algorithms, the proposed method segments apple images more accurately and effectively. The proposed method was tested on 180 apple images. It offered an average accuracy rate of 99.26%, a recall rate of 98.69%, a false positive rate of 0.06%, and a false negative rate of 1.44%. Experimental results demonstrate the outstanding performance of the proposed method.
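One plausible reading of a "gray-centered" RGB representation, given as an assumption rather than the paper's definition: subtract each pixel's gray level (the mean of its R, G, and B values) from every channel, so that achromatic pixels map near zero while reddish fruit pixels retain a strong positive R component that is less sensitive to overall illumination.

```python
import numpy as np

def gray_centered_rgb(img_rgb):
    img = img_rgb.astype(np.float32)
    gray = img.mean(axis=2, keepdims=True)   # per-pixel gray level (channel mean)
    return img - gray                        # channels re-centered on the gray axis
```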


2021 ◽  
Vol 13 (5) ◽  
pp. 939
Author(s):  
Yongan Xue ◽  
Jinling Zhao ◽  
Mingmei Zhang

To accurately extract cultivated land boundaries from high-resolution remote sensing imagery, an improved watershed segmentation algorithm was proposed herein based on a combination of pre- and post-improvement procedures. Image contrast enhancement was used as the pre-improvement, while the color distance of the Commission Internationale de l'Eclairage (CIE) color spaces, including Lab and Luv, was used as the regional similarity measure for region merging as the post-improvement. Furthermore, the area relative error criterion (δA), the pixel quantity error criterion (δP), and the consistency criterion (Khat) were used to evaluate the image segmentation accuracy. Region merging in the Red–Green–Blue (RGB) color space was selected as the baseline for comparison with the proposed algorithm when extracting cultivated land boundaries. The validation experiments were performed on a subset of a Chinese Gaofen-2 (GF-2) remote sensing image with a coverage area of 0.12 km2. The results showed the following: (1) The contrast-enhanced image exhibited an obvious gain in segmentation quality and time efficiency using the improved algorithm; the time efficiency increased by 10.31%, 60.00%, and 40.28% in the RGB, Lab, and Luv color spaces, respectively. (2) The optimal segmentation and merging scale parameters in the RGB, Lab, and Luv color spaces were C (minimum area) values of 2000, 1900, and 2000, and D (color difference) values of 1000, 40, and 40. (3) The algorithm improved the time efficiency of cultivated land boundary extraction in the Lab and Luv color spaces by 35.16% and 29.58%, respectively, compared to the RGB color space. Relative to the RGB color space, the extraction accuracy measured by δA, δP, and Khat improved by 76.92%, 62.01%, and 16.83% in the Lab color space, and by 55.79%, 49.67%, and 13.42% in the Luv color space. (4) In terms of visual comparison, time efficiency, and segmentation accuracy, the overall extraction effect of the proposed algorithm was clearly better than that of the RGB color space-based algorithm, and the established accuracy evaluation indicators proved consistent with the visual evaluation. (5) The proposed method showed satisfactory transferability on a wider test area with a coverage area of 1 km2. In summary, the proposed method performs region merging in the CIE color space, based on image contrast enhancement, according to the simulated immersion watershed segmentation results. It is a useful attempt to apply the watershed segmentation algorithm to extracting cultivated land boundaries, and it provides a reference for enhancing the watershed algorithm.
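A hedged sketch of the region-similarity test used for merging after watershed segmentation (the merging procedure itself is not reproduced): two adjacent regions are candidates for merging when the Euclidean distance between their mean CIE Lab colors falls below a color-difference threshold D. The default D = 40 echoes the Lab parameter reported above; the masks are assumed to be boolean arrays marking each region.

```python
import cv2
import numpy as np

def should_merge(img_bgr, mask_a, mask_b, D=40.0):
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    mean_a = lab[mask_a].mean(axis=0)     # mean Lab color of region A
    mean_b = lab[mask_b].mean(axis=0)     # mean Lab color of region B
    return float(np.linalg.norm(mean_a - mean_b)) < D
```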


Author(s):  
HUA YANG ◽  
MASAAKI KASHIMURA ◽  
NORIKADU ONDA ◽  
SHINJI OZAWA

This paper describes a new system for extracting and classifying bibliography regions from the color image of a book cover. The system consists of three major components: preprocessing, color space segmentation, and text region extraction and classification. Preprocessing extracts the edge lines of the book, geometrically corrects the input image, and segments it into the front cover, spine, and back cover. As in all color image processing research, the segmentation of the color space is an essential and important step here. Instead of the RGB color space, the HSI color space is used in this system. The color space is first segmented into achromatic and chromatic regions, and both regions are then segmented further to complete the color space segmentation. Text region extraction and classification follow. After detecting fundamental features (stroke width and local label width), text regions are determined. By comparing the text regions on the front cover with those on the spine, all extracted text regions are classified into suitable bibliography categories: author, title, publisher, and other information, without applying OCR.
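A hedged sketch of the initial achromatic/chromatic split in HSI (the saturation threshold is an illustrative assumption, not the paper's value): pixels with very low HSI saturation are treated as achromatic (near-gray), and the rest as chromatic.

```python
import numpy as np

def split_achromatic(img_rgb, sat_thresh=0.15):
    img = img_rgb.astype(np.float32) / 255.0
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - np.min(img, axis=2) / (intensity + 1e-6)   # HSI saturation: 1 - min/I
    achromatic = saturation < sat_thresh                          # near-gray pixels
    return achromatic, ~achromatic
```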


2015 ◽  
Vol 734 ◽  
pp. 203-206
Author(s):  
En Zeng Dong ◽  
Sheng Xu Yan ◽  
Kui Xiang Wei

In order to enhance the speed and accuracy of moving target detection and tracking, and to improve the speed of the algorithm on a DSP (digital signal processor), an active visual tracking system based on the Gaussian mixture background model and the Meanshift algorithm was designed on the DM6437. The system uses the VLIB library developed by TI; moving objects are detected with the Gaussian mixture background model, and the target is tracked with the Meanshift tracking algorithm based on color features in RGB space. Finally, the system was tested on the hardware platform and verified to be fast and accurate.
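A hedged, PC-side sketch of the detect-then-track flow (the paper's version runs on a DM6437 DSP with TI's VLIB): background subtraction yields an initial bounding box, then OpenCV's mean-shift follows the target using color-histogram back-projection. The video file name and the initial window are placeholder assumptions.

```python
import cv2

cap = cv2.VideoCapture("video.avi")                     # placeholder input
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 120                          # placeholder detection from background subtraction
roi = frame[y:y + h, x:x + w]
hist = cv2.calcHist([roi], [0, 1, 2], None, [8, 8, 8],
                    [0, 256, 0, 256, 0, 256])           # color histogram of the target
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    backproj = cv2.calcBackProject([frame], [0, 1, 2], hist,
                                   [0, 256, 0, 256, 0, 256], 1)
    _, (x, y, w, h) = cv2.meanShift(backproj, (x, y, w, h), criteria)   # updated track window
```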

