Moving Object Detection and Tracking Using Particle Filter

2013 · Vol 321-324 · pp. 1200-1204
Author(s): M.M. Naushad Ali, M. Abdullah-Al-Wadud, Seok Lyong Lee

Moving human detection and tracking are challenging tasks in computer vision. Human motion is usually non-linear and non-Gaussian, so many common tracking algorithms are not appropriate. In this paper we propose a robust tracking algorithm based on the particle filter. Multiple moving humans in a video sequence are detected using frame differencing and morphological operations. Feature points of each person are then extracted using the Harris corner detection algorithm. Finally, a Histogram of Oriented Gradients (HOG) descriptor is computed at each feature point, and the feature points of the corresponding person are tracked with the particle filter. Experimental results demonstrate that our method effectively improves tracking performance.
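
The processing chain described above can be sketched with a few OpenCV/NumPy functions. This is a minimal illustration, not the authors' implementation: the thresholds, window sizes, HOG layout and the Gaussian particle-weighting are all assumptions made for the sketch.

```python
import cv2
import numpy as np

def detect_moving_people(prev_gray, gray, diff_thresh=25, min_area=500):
    """Frame differencing plus morphological cleanup; returns bounding boxes."""
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle noise
    mask = cv2.dilate(mask, kernel, iterations=2)           # merge nearby fragments
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]

def harris_points(gray, box, max_corners=20):
    """Harris corners inside one person's bounding box, in full-image coordinates."""
    x, y, w, h = box
    pts = cv2.goodFeaturesToTrack(gray[y:y + h, x:x + w], maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=5,
                                  useHarrisDetector=True, k=0.04)
    if pts is None:
        return np.empty((0, 2), np.float32)
    return pts.reshape(-1, 2) + np.float32([x, y])

def hog_at(gray, pt, win=32):
    """HOG descriptor of a small window centred on a feature point."""
    x, y = int(pt[0]), int(pt[1])
    h = win // 2
    patch = gray[max(y - h, 0):y + h, max(x - h, 0):x + h]
    if patch.shape != (win, win):
        patch = cv2.resize(patch, (win, win))
    hog = cv2.HOGDescriptor((win, win), (16, 16), (8, 8), (8, 8), 9)
    return hog.compute(patch).ravel()

def particle_weights(gray, particles, ref_desc, sigma=0.25):
    """Weight each particle (a hypothesised point position) by how closely the
    HOG descriptor at that position matches the reference descriptor."""
    d = np.array([np.linalg.norm(hog_at(gray, p) - ref_desc) for p in particles])
    w = np.exp(-d ** 2 / (2 * sigma ** 2))                   # assumed Gaussian likelihood
    return w / (w.sum() + 1e-12)
```

In a full tracker these weights would drive the usual particle-filter cycle of propagation, weighting and resampling for each person's feature points.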

2014 · Vol 615 · pp. 158-164
Author(s): Liang Sun, Jian Chun Xing, Shuang Qing Wang, Shi Qiang Wang

To effectively suppress the image dithering caused by wind-induced vibration in security monitoring systems, the feature points of consecutive frames must be extracted and matched. The Harris corner detection algorithm is a widely used feature extraction algorithm in image processing. In the security monitoring field, however, the captured images and videos are characterized by large scale, high pixel counts and low contrast, and the classical algorithm often fails to extract feature points effectively from such material. To address these problems, this paper puts forward an improved self-adaptive corner detection algorithm. First, each pixel is compared against its eight neighboring pixels under a self-adaptive gray threshold to pre-select a subset of corner candidates. Next, the pre-selected points are classified into three types according to certain rules and the value of the chosen self-adaptive gray threshold. Finally, according to the classification results, different corner response function thresholds are applied to the pre-selected points to eliminate peripheral points and pseudo-corners, yielding the genuine corners. The improved algorithm was verified in a practical security monitoring scenario, and the results confirm its effectiveness, feasibility and robustness.
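
A simplified sketch of the pre-selection idea is given below, assuming OpenCV and NumPy. It covers only the adaptive eight-neighbour comparison and the Harris response evaluated at the surviving candidates; the paper's three-type classification and its exact threshold rules are not reproduced, and the global adaptive threshold used here is an assumption.

```python
import cv2
import numpy as np

def preselect_candidates(gray, t_ratio=0.1, min_diff_neighbours=5):
    """Pre-select corner candidates: keep a pixel when at least min_diff_neighbours of
    its eight neighbours differ from it by more than an adaptive gray threshold
    (here a fraction of the global intensity range, which is an assumption)."""
    g = gray.astype(np.float32)
    t = t_ratio * (g.max() - g.min())                        # self-adaptive gray threshold
    count = np.zeros_like(g)
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]:
        count += (np.abs(np.roll(np.roll(g, dy, 0), dx, 1) - g) > t)
    return count >= min_diff_neighbours

def adaptive_harris(gray, k=0.04, resp_ratio=0.01):
    """Harris response kept only at pre-selected candidates, then thresholded."""
    resp = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=k)
    resp = np.where(preselect_candidates(gray), resp, 0)
    return np.argwhere(resp > resp_ratio * resp.max())       # (row, col) corner positions
```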


2012 · Vol 605-607 · pp. 2227-2231
Author(s): Wu Yang Ding, Ling Zhang, Yun Hua Chen

A yawning detection method that can be used for monitoring driver fatigue is proposed. To adapt to variation in mouth shapes and sizes, it is based on corner detection along the inner mouth contour and curve fitting. First, the Harris corner detection algorithm is used to detect feature points on the inner mouth contour. Second, a mathematical model of the open mouth is established by curve fitting these points, the degree of mouth openness is calculated from the model, and a real-time M-curve is generated. Third, the duration of wide mouth openness across successive frames is divided into levels for further judgment. Validation results show that the method obtains more precise mouth parameters and distinguishes yawning from other complex mouth activities, achieving a higher level of accuracy.
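
The openness measure can be illustrated with a small NumPy sketch. The split of inner-contour points into upper- and lower-lip sets and the quadratic curve model are assumptions made for illustration; the paper's mouth model may differ.

```python
import numpy as np

def mouth_openness(inner_contour_pts):
    """Fit quadratic curves to the upper- and lower-lip inner-contour points and take
    the maximum vertical gap, normalised by mouth width, as the degree of openness
    (one sample of the real-time M-curve)."""
    pts = np.asarray(inner_contour_pts, dtype=float)          # (x, y) Harris feature points
    x, y = pts[:, 0], pts[:, 1]
    upper = pts[y < y.mean()]                                 # assumed split into upper lip...
    lower = pts[y >= y.mean()]                                # ...and lower lip point sets
    cu = np.polyfit(upper[:, 0], upper[:, 1], 2)              # upper-lip curve
    cl = np.polyfit(lower[:, 0], lower[:, 1], 2)              # lower-lip curve
    xs = np.linspace(x.min(), x.max(), 50)
    gap = np.max(np.polyval(cl, xs) - np.polyval(cu, xs))     # image y grows downwards
    return gap / max(x.max() - x.min(), 1e-6)
```

A yawn would then be flagged when this value stays above a chosen level for long enough across successive frames, mirroring the duration-based judgment in the abstract.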


2014 · Vol 936 · pp. 2263-2266
Author(s): Wan Bing Li, Hong Wei Quan, Xia Fei Huang

To match two or more images originating from the same scene, a new fast automatic registration algorithm based on sparse feature point extraction is proposed. In the first step, an improved Harris corner detection algorithm is used to obtain two sets of feature points from the reference image and the registration image. Second, a group of sparse feature points is selected from the reference set as initial control points. Then, the corresponding matching points in the registration set are found using local moment invariant similarity detection. Experimental results demonstrate that this method is fast and efficient.
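
A sketch of the moment-invariant matching step, assuming OpenCV/NumPy and using Hu moments of a local patch as the "local moment invariant" descriptor; the paper's exact invariants and distance threshold are not specified here, so both are assumptions.

```python
import cv2
import numpy as np

def hu_signature(gray, pt, win=21):
    """Hu moment invariants of a small patch centred on a feature point,
    log-scaled so the seven values have comparable magnitudes."""
    x, y = int(pt[0]), int(pt[1])
    h = win // 2
    patch = gray[max(y - h, 0):y + h + 1, max(x - h, 0):x + h + 1].astype(np.float64)
    hu = cv2.HuMoments(cv2.moments(patch)).ravel()
    return np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def match_control_points(ref_gray, reg_gray, ref_pts, reg_pts, max_dist=1.0):
    """For each sparse control point of the reference image, pick the registration-image
    feature point with the most similar moment signature."""
    ref_sig = np.array([hu_signature(ref_gray, p) for p in ref_pts])
    reg_sig = np.array([hu_signature(reg_gray, p) for p in reg_pts])
    matches = []
    for i, s in enumerate(ref_sig):
        d = np.linalg.norm(reg_sig - s, axis=1)
        j = int(np.argmin(d))
        if d[j] < max_dist:
            matches.append((i, j))                            # (reference index, match index)
    return matches
```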


2012 · Vol 6-7 · pp. 717-721
Author(s): Zhao Yang Zeng, Zhi Qiang Jiang, Qiang Chen, Pan Feng He

To accurately extract corners from images with high texture complexity, this paper analyzes traditional corner detection algorithms based on image gray values. Although the Harris corner detection algorithm has relatively high accuracy, it still has the following problems: it extracts false corners, some corner information is missed, and its computation time is rather long. An improved corner detection algorithm that combines Harris with the SUSAN corner detector is therefore proposed: the new algorithm first uses Harris to detect corners in the image, and then uses SUSAN to eliminate the false corners. Comparative test results show that the new algorithm extracts corners very effectively and outperforms the Harris algorithm in corner detection performance.
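
The Harris-then-SUSAN idea can be sketched as follows: detect Harris candidates, then compute each candidate's USAN area (the count of mask pixels with intensity similar to the nucleus) and discard candidates whose USAN area is too large. The radius, brightness threshold and rejection fraction below are illustrative values, not the paper's.

```python
import cv2
import numpy as np

def usan_area(gray, pt, radius=3, t=27):
    """USAN area at a point: pixels in a circular mask whose intensity is within t of
    the nucleus. Genuine corners have a small USAN area."""
    x, y = int(pt[0]), int(pt[1])
    h, w = gray.shape
    area = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if dx * dx + dy * dy <= radius * radius and 0 <= yy < h and 0 <= xx < w \
                    and abs(int(gray[yy, xx]) - int(gray[y, x])) <= t:
                area += 1
    return area

def harris_then_susan(gray, k=0.04, resp_ratio=0.01, usan_frac=0.5, radius=3):
    """Detect Harris candidates, then keep only those whose USAN area is below a
    fraction of the mask size, rejecting false corners SUSAN-style."""
    resp = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=k)
    cand = np.argwhere(resp > resp_ratio * resp.max())        # (row, col) candidates
    mask_size = sum(1 for dy in range(-radius, radius + 1)
                    for dx in range(-radius, radius + 1)
                    if dx * dx + dy * dy <= radius * radius)
    return [(c, r) for r, c in cand
            if usan_area(gray, (c, r), radius) < usan_frac * mask_size]
```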


2021 · pp. 335-344
Author(s): Yusong Chen, Changxing Geng, Yong Wang, Guofeng Zhu, Renyuan Shen

For extracting the centerline of paddy rice seedling rows, this study proposes a method based on the Fast-SCNN (Fast Segmentation Convolutional Neural Network) semantic segmentation network. By training the Fast-SCNN network, the optimal model was selected to segment the seedlings from the image. After pre-processing the original images, feature points were extracted using the FAST (Features from Accelerated Segment Test) corner detection algorithm. All outer contours of the segmentation results were extracted, and the feature points were classified based on these outer contours. For each class of points, a Hough transformation based on known points was used to fit the seedling-row centerline. Experiments verified that the algorithm is highly robust in every period within three weeks after transplanting. On a 1280×1024-pixel PNG color image, the accuracy of the algorithm is 95.9% and the average processing time per frame is 158 ms, which meets the real-time requirement of visual navigation in paddy fields.
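
The corner-extraction and line-fitting steps can be sketched as follows, assuming OpenCV/NumPy. The known-point Hough fit here constrains every candidate line to pass through a chosen anchor point and votes only on the angle; the choice of anchor and all thresholds are assumptions, and the Fast-SCNN segmentation and contour-based classification stages are omitted.

```python
import cv2
import numpy as np

def fast_corners(seg_mask, threshold=20):
    """FAST feature points on the (8-bit) segmentation result."""
    fast = cv2.FastFeatureDetector_create(threshold=threshold, nonmaxSuppression=True)
    return np.float32([kp.pt for kp in fast.detect(seg_mask, None)])

def centerline_through_anchor(points, anchor, n_theta=180):
    """Known-point Hough fit: every candidate line passes through 'anchor', so each
    feature point of one seedling row votes only for an angle; the peak angle and the
    anchor together define the row centreline."""
    ax, ay = anchor
    votes = np.zeros(n_theta, dtype=int)
    for x, y in points:
        theta = np.arctan2(y - ay, x - ax) % np.pi            # line direction in [0, pi)
        votes[int(theta / np.pi * n_theta) % n_theta] += 1
    best_theta = (np.argmax(votes) + 0.5) / n_theta * np.pi
    return anchor, best_theta                                 # point + angle parameterisation
```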

