A Moving Object Detection Process for Computer Vision Application

Author(s):  
I Shieh ◽  
K F Gill

The aim of this paper is to present a novel method for processing a digitized image that allows pertinent information about object movement in a scene to be extracted. A frame difference method locates candidate moving regions, which are evaluated by a hypothesis testing procedure to identify accretion and deletion regions. Accretion regions are selected and used as seeds to search for moving objects in the current frame. Contour tracing is applied to establish the boundary of an accretion region, which is then used to help recognize the moving object. The results of this work reveal that motion can be used as an effective cue for object detection in an image sequence.
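As a rough illustration of the frame-differencing and contour-tracing steps described above, the sketch below (assuming OpenCV and two grayscale frames) locates candidate regions and traces their boundaries; the threshold and minimum-area values are illustrative, and the hypothesis-testing step that separates accretion from deletion regions is not shown.

```python
# Minimal sketch: frame differencing followed by contour tracing (OpenCV assumed).
import cv2

def accretion_candidates(prev_frame, curr_frame, diff_thresh=25, min_area=50):
    """Locate candidate moving regions via frame differencing, then trace
    their boundaries with contour extraction. Thresholds are illustrative."""
    diff = cv2.absdiff(curr_frame, prev_frame)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # Contour tracing establishes the boundary of each candidate region.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > min_area]
```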

With advances in technology, security and authentication have become central concerns in computer vision. Moving object detection is an efficient technique whose goal is to preserve the perceptible and principal source within a group of frames. Surveillance is one of the most crucial requirements and is carried out to monitor various kinds of activities; the detection and tracking of moving objects are the fundamental concepts underlying surveillance systems. Moving object recognition is a challenging task in the field of digital image processing, and it underpins applications such as human-machine interaction (HMI), safety and video surveillance, augmented reality, road traffic monitoring, and medical imaging. The main goal of this research is the detection and tracking of moving objects. The proposed approach begins with a pre-processing stage in which frames are extracted and their dimensionality is reduced. Morphological methods are applied to clean the foreground image of the moving objects, and texture-based features are extracted using a component analysis method. A novel method is then designed: an optimized multilayer perceptron neural network whose layers are optimized according to the Pbest and Gbest particle positions of the objects; it computes fitness values from the binary swarm/object positions (x_update, y_update). The final frames of the moving objects in the video are produced using a blob analyser. An application is implemented in MATLAB version 2016a, in which an activation function re-filters the given input and the final output is calculated with a predefined sigmoid. The proposed method is evaluated for detection and tracking on the MOT, FOOTBALL, INDOOR, and OUTDOOR datasets, with the aim of improving the detection accuracy and recall rates while reducing the error, false positive, and false negative rates, and the results are compared with classifiers such as KNN, MLPNN, and the J48 decision tree.
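A minimal sketch of the pre-processing described above, assuming OpenCV: a foreground mask is cleaned morphologically and then passed to blob (connected-component) analysis to produce one box per candidate object. The kernel size and minimum blob area are illustrative assumptions; the PSO-optimized multilayer perceptron classifier itself is not shown.

```python
# Minimal sketch: morphological cleaning plus blob analysis (OpenCV assumed).
import cv2

def clean_and_blob(fg_mask, min_area=100):
    """fg_mask: binary uint8 foreground mask (0 or 255).
    Returns the cleaned mask and one bounding box (x, y, w, h) per blob."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    # Opening removes isolated noise; closing fills small holes.
    cleaned = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)
    # Connected-component (blob) analysis yields one bounding box per object.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(cleaned)
    boxes = [tuple(stats[i, :4]) for i in range(1, n)
             if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return cleaned, boxes
```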


2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Yizhong Yang ◽  
Qiang Zhang ◽  
Pengfei Wang ◽  
Xionglou Hu ◽  
Nengju Wu

Moving object detection in video streams is the first step of many computer vision applications. Background modeling and subtraction is the most common technique for this detection, yet detecting moving objects correctly remains a challenge. Some methods initialize the background model at each pixel from the first N frames; however, such a model cannot perform well in dynamic background scenes, since it contains only temporal features. Herein, a novel pixelwise and nonparametric moving object detection method is proposed that incorporates both spatial and temporal features. The proposed method can accurately detect the dynamic background. Additionally, several new mechanisms are proposed to maintain and update the background model. Experimental results on image sequences from public datasets show that the proposed method is robust and effective in dynamic background scenes compared with existing methods.
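A minimal sketch of a pixelwise, nonparametric (sample-based) background test in the spirit of the method described above. The sample-matching radius, the required match count, and the random-replacement update are illustrative assumptions, not the paper's exact mechanisms.

```python
# Minimal sketch: sample-based per-pixel background test and update (NumPy assumed).
import numpy as np

def is_background(pixel_samples, pixel_value, radius=20, min_matches=2):
    """pixel_samples: 1-D array of stored intensity samples for one pixel.
    The pixel is background if enough samples lie within the matching radius."""
    matches = np.sum(np.abs(pixel_samples.astype(int) - int(pixel_value)) < radius)
    return matches >= min_matches

def update_model(pixel_samples, pixel_value, rng=np.random):
    """Randomly replace one stored sample (conservative, memoryless update,
    called only when the pixel has been classified as background)."""
    idx = rng.randint(len(pixel_samples))
    pixel_samples[idx] = pixel_value
```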


2020 ◽  
Vol 17 (4) ◽  
pp. 172988142094727
Author(s):  
Wenlong Zhang ◽  
Xiaoliang Sun ◽  
Qifeng Yu

Due to cluttered background motion, accurate moving object segmentation in unconstrained videos remains a significant open problem, especially for slow-moving objects. This article proposes an accurate moving object segmentation method based on robust seed selection. The seed pixels of the object and background are selected robustly using optical flow cues. First, the method detects the moving object's rough contour according to the local difference in the weighted orientation cues of the optical flow. The detected rough contour is then used to guide the selection of object and background seed pixels. The object seed pixels in the previous frame are propagated to the current frame according to the optical flow to improve the robustness of the seed selection. Finally, the random walker algorithm is adopted to segment the moving object accurately according to the selected seed pixels. Experiments on publicly available datasets indicate that the proposed method shows excellent performance in segmenting moving objects accurately in unconstrained videos.
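A minimal sketch of seeding a random-walker segmentation from optical flow, assuming OpenCV and scikit-image. The percentile thresholds used to pick object and background seeds are illustrative stand-ins for the paper's rough-contour criterion, and the propagation of seeds from the previous frame is not shown.

```python
# Minimal sketch: optical-flow-based seeds feeding a random-walker segmentation.
import cv2
import numpy as np
from skimage.segmentation import random_walker

def segment_moving_object(prev_gray, curr_gray):
    # Dense optical flow; large flow magnitude suggests the moving object.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    seeds = np.zeros(curr_gray.shape, dtype=np.uint8)
    seeds[mag > np.percentile(mag, 95)] = 1   # confident object seeds
    seeds[mag < np.percentile(mag, 50)] = 2   # confident background seeds
    labels = random_walker(curr_gray.astype(float), seeds, beta=130)
    return labels == 1                         # boolean object mask
```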


2014 ◽  
Vol 556-562 ◽  
pp. 3549-3552
Author(s):  
Lian Fen Huang ◽  
Qing Yue Chen ◽  
Jin Feng Lin ◽  
He Zhi Lin

The key to background subtraction, which is widely used in moving object detection, is setting up and updating the background model. This paper presents a block-based background subtraction method built on ViBe that exploits the spatial correlation and temporal continuity of the video sequence. The background model of the video sequence is set up first; the model is then updated through block processing; and finally the difference between the current frame and the background model is used to extract moving objects.
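A minimal sketch of block-wise background updating, using a running-mean background image as a simple stand-in for the ViBe sample model: a block is refreshed only when most of its pixels agree with the model, exploiting spatial correlation. The block size, agreement ratio, and learning rate are illustrative assumptions.

```python
# Minimal sketch: block-wise update of a running-mean background model (NumPy assumed).
import numpy as np

def blockwise_update(bg_mean, frame, fg_mask, block=16, agree=0.9, alpha=0.05):
    """bg_mean: float background image; frame: current grayscale frame;
    fg_mask: boolean foreground mask from the subtraction step."""
    h, w = frame.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            fg_ratio = fg_mask[y:y+block, x:x+block].mean()
            if 1.0 - fg_ratio >= agree:   # block is essentially background
                bg_mean[y:y+block, x:x+block] = (
                    (1 - alpha) * bg_mean[y:y+block, x:x+block]
                    + alpha * frame[y:y+block, x:x+block])
    return bg_mean
```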


Symmetry ◽  
2020 ◽  
Vol 12 (12) ◽  
pp. 1965
Author(s):  
Juncai Zhu ◽  
Zhizhong Wang ◽  
Songwei Wang ◽  
Shuli Chen

Detecting moving objects in a video sequence is an important problem in many vision-based applications. In particular, detecting moving objects when the camera itself is moving is difficult. In this study, we propose a symmetric method for detecting moving objects in the presence of a dynamic background. First, a background compensation method is used to detect the proposed region of motion. Next, in order to accurately locate the moving objects, we propose YOLOv3-SOD, a lightweight convolutional neural network-based method specifically designed for small objects, to detect all objects in the image. Finally, the moving objects are determined by fusing the results obtained by motion detection and object detection. Missed detections are recalled according to the temporal and spatial information in adjacent frames. A dataset is not currently available specifically for moving object detection and recognition, and thus, we have released the MDR105 dataset comprising three classes with 105 videos. Our experiments demonstrated that the proposed algorithm can accurately detect moving objects in various scenarios with good overall performance.
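A minimal sketch of the background-compensation stage, assuming OpenCV: the previous frame is warped onto the current one with a feature-based homography before differencing, so that only residual motion remains. The feature detector, RANSAC threshold, and difference threshold are illustrative; the YOLOv3-SOD network and the fusion step are not shown.

```python
# Minimal sketch: homography-based background compensation, then differencing.
import cv2
import numpy as np

def motion_regions(prev_gray, curr_gray, diff_thresh=30):
    # Match features between frames to estimate the global (background) motion.
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # Warp the previous frame so the static background aligns with the current frame.
    warped = cv2.warpPerspective(prev_gray, H, prev_gray.shape[::-1])
    diff = cv2.absdiff(curr_gray, warped)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask
```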


Symmetry ◽  
2019 ◽  
Vol 11 (1) ◽  
pp. 34 ◽  
Author(s):  
Jisang Yoo ◽  
Gyu-cheol Lee

The moving object detection task can be solved by a background subtraction algorithm if the camera is fixed. However, detecting moving objects from a moving car is difficult because the background also moves. There have been attempts to detect moving objects using LiDAR or stereo cameras, but the detection rate decreases when the car moves. We propose a moving object detection algorithm based on an object motion reflection model of motion vectors. The proposed method first obtains the disparity map by searching for corresponding regions between the stereo images. The road is then estimated by applying the v-disparity method to the disparity map. Optical flow is used to acquire the motion vectors of symmetric pixels between adjacent frames from which the road has been removed. We design a probability model of how much the local motion is reflected in each motion vector to determine whether an object is moving. We evaluated the proposed method on two datasets and confirmed that it detects moving objects with higher accuracy than other methods.
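A minimal sketch of building a v-disparity image from a disparity map, the representation used above to estimate the road surface: each image row is histogrammed over disparity values, and the road then appears as a dominant slanted line. The maximum disparity is an illustrative assumption.

```python
# Minimal sketch: v-disparity image from a dense disparity map (NumPy assumed).
import numpy as np

def v_disparity(disparity, max_disp=64):
    """For every image row, histogram the disparity values; the road surface
    shows up as a dominant slanted line in the (rows x max_disp) result."""
    h, _ = disparity.shape
    v_disp = np.zeros((h, max_disp), dtype=np.int32)
    d = np.clip(disparity.astype(int), 0, max_disp - 1)
    for row in range(h):
        v_disp[row] = np.bincount(d[row], minlength=max_disp)[:max_disp]
    return v_disp
```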


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Jinhai Xiang ◽  
Heng Fan ◽  
Honghong Liao ◽  
Jun Xu ◽  
Weiping Sun ◽  
...  

Moving object detection is a fundamental step in video surveillance systems. To eliminate the influence of illumination change and of shadows associated with the moving objects, we propose a local intensity ratio model (LIRM) that is robust to illumination change. Based on an analysis of the illumination and shadow model, we discuss the distribution of the local intensity ratio, and the moving objects are segmented without shadow using the normalized local intensity ratio via a Gaussian mixture model (GMM). Erosion is then used to obtain the moving object contours and to erase scattered shadow patches and noise. After that, the contours are enhanced by a new contour enhancement method that considers the foreground ratio and spatial relations. Finally, a new method is used to fill holes in the foreground. Experimental results demonstrate that the proposed approach can extract moving objects without cast shadows and shows excellent performance under various illumination change conditions.
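A minimal sketch of a local intensity ratio, assuming OpenCV: each pixel is divided by its local mean, so a multiplicative illumination change (including soft shadow) largely cancels. The window size and epsilon are illustrative assumptions; the GMM segmentation of the normalized ratio is not shown.

```python
# Minimal sketch: local intensity ratio as an illumination-robust feature.
import cv2
import numpy as np

def local_intensity_ratio(gray, window=7, eps=1.0):
    """Divide each pixel by its local mean; a locally multiplicative
    illumination change affects numerator and denominator alike."""
    local_mean = cv2.blur(gray.astype(np.float32), (window, window))
    return gray.astype(np.float32) / (local_mean + eps)
```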


Author(s):  
Marcus Laumer ◽  
Peter Amon ◽  
Andreas Hutter ◽  
André Kaup

This paper presents a moving object detection algorithm for H.264/AVC video streams that operates in the compressed domain. The method extracts and analyzes several syntax elements from any H.264/AVC-compliant bit stream. The number of analyzed syntax elements depends on the mode in which the method operates. The algorithm can perform either a spatiotemporal analysis in a single step or a two-step analysis that starts with a spatial analysis of each frame, followed by a temporal analysis of several subsequent frames. In each mode, either only the (sub-)macroblock types and partition modes or, additionally, the quantization parameters are analyzed. The evaluation of these syntax elements enables the algorithm to determine a "weight" for each 4×4 block of pixels that indicates the level of motion within that block. A final segmentation based on these weights divides each frame into foreground and background and thereby indicates the positions and sizes of all moving objects. Our experiments show that the algorithm efficiently detects moving objects in the compressed domain and that it can be configured to process a large number of parallel bit streams in real time.
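A minimal sketch of the block-weighting idea, under the assumption that finer (sub-)macroblock partitions tend to accompany moving content: each macroblock receives a weight from a hypothetical mode-to-weight table and is flagged as foreground when the weight exceeds a threshold. Parsing the H.264/AVC bit stream and the quantization-parameter analysis are not shown.

```python
# Minimal sketch: weighting macroblocks by partition mode (hypothetical table).
PARTITION_WEIGHT = {   # hypothetical weights per (sub-)macroblock partition mode
    "SKIP": 0.0, "16x16": 0.2, "16x8": 0.4, "8x16": 0.4,
    "8x8": 0.6, "8x4": 0.8, "4x8": 0.8, "4x4": 1.0,
}

def motion_macroblocks(mb_modes, threshold=0.5):
    """mb_modes: dict mapping (mb_x, mb_y) -> partition mode string.
    Returns the set of macroblock positions flagged as containing motion."""
    return {pos for pos, mode in mb_modes.items()
            if PARTITION_WEIGHT.get(mode, 0.0) >= threshold}
```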

