Adaptive Background Image Calculation for Video Sequences Based on Optical Flow

2014 ◽  
Vol 945-949 ◽  
pp. 1820-1824
Author(s):  
Hui Zhu ◽  
Xiao Peng Ji

A new method is proposed to calculate the background in video sequences. Optical flow is estimated to determine the local regions occupied by moving objects. The background image is then calculated by an efficient averaging process that excludes the moving-object regions, which overcomes the foreground-occlusion problem of the direct averaging method for background estimation. Experiments on traffic video processing demonstrate the method's effectiveness and robustness.
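
As a rough illustration of the idea, the following sketch (not the authors' code) estimates a background image by averaging only pixels whose dense optical-flow magnitude stays below a threshold; OpenCV's Farneback flow and the threshold value are assumptions made for illustration.

```python
# Hypothetical sketch: background estimation by averaging frames while
# excluding regions flagged as moving by dense optical flow.
import cv2
import numpy as np

def estimate_background(frames, flow_threshold=1.0):
    """Average grayscale frames, skipping pixels with large optical-flow magnitude."""
    first = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    acc = np.zeros(first.shape, dtype=np.float64)
    count = np.zeros_like(acc)
    prev = first
    for frame in frames[1:]:
        curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)
        static = magnitude < flow_threshold   # pixels not occupied by moving objects
        acc[static] += curr[static]
        count[static] += 1
        prev = curr
    count[count == 0] = 1                     # avoid division by zero
    return (acc / count).astype(np.uint8)
```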

2020 ◽  
Vol 17 (4) ◽  
pp. 172988142094727
Author(s):  
Wenlong Zhang ◽  
Xiaoliang Sun ◽  
Qifeng Yu

Due to cluttered background motion, accurate moving object segmentation in unconstrained videos remains a significant open problem, especially for slow-moving objects. This article proposes an accurate moving object segmentation method based on robust seed selection. The seed pixels of the object and the background are selected robustly using optical flow cues. First, the article detects the moving object's rough contour according to the local difference in the weighted orientation cues of the optical flow. Then, the detected rough contour is used to guide the selection of object and background seed pixels. The object seed pixels in the previous frame are propagated to the current frame according to the optical flow to improve the robustness of the seed selection. Finally, the random walker algorithm is adopted to segment the moving object accurately according to the selected seed pixels. Experiments on publicly available data sets indicate that the proposed method shows excellent performance in segmenting moving objects accurately in unconstrained videos.
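
A minimal sketch of the seed-then-propagate idea, not the authors' implementation: optical-flow magnitude supplies confident object and background seeds, and scikit-image's random walker diffuses the labels to the remaining pixels. The thresholds and the use of raw intensity as the walker's data term are assumptions.

```python
# Hypothetical sketch: optical-flow seeds + random walker segmentation.
import cv2
import numpy as np
from skimage.segmentation import random_walker

def segment_moving_object(prev_gray, curr_gray, hi=4.0, lo=0.5):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    seeds = np.zeros(curr_gray.shape, dtype=np.int32)  # 0 = unlabeled
    seeds[mag > hi] = 2                                 # confident object seeds
    seeds[mag < lo] = 1                                 # confident background seeds
    labels = random_walker(curr_gray.astype(np.float64), seeds, beta=90)
    return labels == 2                                  # boolean object mask
```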


2013 ◽  
Vol 321-324 ◽  
pp. 1041-1045
Author(s):  
Jian Rong Cao ◽  
Yang Xu ◽  
Cai Yun Liu

After background modeling and moving-object segmentation of a surveillance video, this paper first presents a non-interactive matting algorithm for video moving objects based on GrabCut. The matted moving objects are then placed into a background image under a non-overlapping arrangement, so that a single frame can contain several moving objects on one background. Finally, a series of such frames is assembled along the timeline to form a single-camera surveillance video synopsis. The experimental results show that the video synopsis is concise and readable in its condensed form, and that browsing and retrieval efficiency can be improved.
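
The non-interactive matting step could be sketched as follows, assuming a binary motion mask from background subtraction is available; the mask-to-GrabCut label mapping and the paste location are illustrative, not the paper's exact procedure.

```python
# Hypothetical sketch: mask-initialized GrabCut matting, then pasting the
# matted object onto a background image (no overlap handling here).
import cv2
import numpy as np

def matte_and_paste(frame, fg_mask, background, top_left=(0, 0)):
    # Translate a binary motion mask into GrabCut's four-label convention.
    gc_mask = np.where(fg_mask > 0, cv2.GC_PR_FGD, cv2.GC_BGD).astype(np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame, gc_mask, None, bgd_model, fgd_model,
                5, cv2.GC_INIT_WITH_MASK)
    matte = np.isin(gc_mask, (cv2.GC_FGD, cv2.GC_PR_FGD))
    y, x = top_left
    h, w = frame.shape[:2]
    roi = background[y:y + h, x:x + w]
    roi[matte] = frame[matte]          # paste the matted moving object
    return background
```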


2021 ◽  
Author(s):  
Zhenhe Chen

Video object extraction is one of the most important areas of video processing, in which objects are extracted from video sequences and used for many applications such as surveillance systems and pattern recognition. In this research work, an object-based technique based on spatiotemporal independent component analysis (stICA) is developed to extract moving objects from video sequences. Using the stICA, preliminary source images containing the moving objects in the video sequence are extracted. These images are then processed using wavelet analysis, edge detection, region growing, and multiscale segmentation techniques to improve the accuracy of the extracted objects. A novel compensation method is applied to deal with the nonlinear problem caused by applying the stICA directly to the video sequences. The recovered objects are indexed by singular value decomposition (SVD) and linear combination analysis. Simulation results demonstrate the effectiveness of the stICA-based object extraction technique in content-based video processing applications.
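
A minimal sketch of the spatiotemporal ICA step, assuming frames are flattened into a (time x pixels) matrix and separated with scikit-learn's FastICA; the post-processing (wavelets, region growing, multiscale segmentation) and the compensation method mentioned in the abstract are not reproduced here.

```python
# Hypothetical sketch: flatten the video into a (time x pixels) matrix and
# separate spatial source images with FastICA; some sources tend to isolate
# the moving content.
import numpy as np
from sklearn.decomposition import FastICA

def extract_source_images(frames_gray, n_sources=4):
    t = len(frames_gray)
    h, w = frames_gray[0].shape
    X = np.stack(frames_gray).reshape(t, h * w).astype(np.float64)
    ica = FastICA(n_components=n_sources, random_state=0, max_iter=500)
    ica.fit(X)                                    # rows of components_ are spatial sources
    return ica.components_.reshape(n_sources, h, w)
```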


2019 ◽  
Vol 8 (3) ◽  
pp. 5740-5745

Background estimation and foreground extraction play prominent roles in visual object detection and tracking. Moving object detection is widely used in many disciplines such as intelligent systems, security systems, video monitoring, banking, provisioning systems, and so on. This paper proposes a moving object detection and tracking method based on embedded video surveillance. The method uses lines computed by a gradient-based optical flow together with an edge detector; although gradient-based optical flow and edges are well matched for accurate computation of velocity, little attention has been paid to building object tracking systems around this feature. The proposed method is compared with a recent work and shows superior performance; it can represent high-quality videos and images at a lower bit rate and is suitable for real-world live video applications. The method also reduces the influence of foreground objects on the background model. The simulation results show that the background image can be obtained precisely and that moving objects are recognized effectively.
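
A hedged sketch of the flow-plus-edges combination described above: dense gradient-based optical flow flags moving pixels, a Canny edge map supplies object boundaries, and their intersection gives candidate moving-object edges. The thresholds are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: intersect a dense-flow motion mask with Canny edges
# to keep only edges that belong to moving objects.
import cv2
import numpy as np

def moving_edges(prev_gray, curr_gray, flow_threshold=1.5):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    moving = np.linalg.norm(flow, axis=2) > flow_threshold
    edges = cv2.Canny(curr_gray, 50, 150) > 0
    return np.logical_and(moving, edges)          # candidate moving-object edges
```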


Author(s):  
Bruno Sauvalle ◽  
Arnaud de La Fortelle

The goal of background reconstruction is to recover the background image of a scene from a sequence of frames showing this scene cluttered by various moving objects. This task is fundamental in image analysis and is generally the first step before more advanced processing, but it is difficult because there is no formal definition of what should be considered background or foreground, and the results may be severely impacted by challenges such as illumination changes, intermittent object motion, and highly cluttered scenes. We propose in this paper a new iterative algorithm for background reconstruction, in which the current estimate of the background is used to guess which image pixels are background pixels, and a new background estimate is computed using those pixels only. We then show that the proposed algorithm, which uses stochastic gradient descent for improved regularization, is more accurate than the state of the art on the challenging SBMnet dataset, especially for short videos with low frame rates, and is also fast, reaching an average of 52 fps on this dataset when parameterized for maximal accuracy, using GPU acceleration and a Python implementation.
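
A minimal sketch of the iterative idea, not the authors' model: the current background estimate labels pixels as background or foreground, and a stochastic-gradient step on an L1 loss updates the background using the background-labeled pixels only. The learning rate and threshold are assumed values for illustration.

```python
# Hypothetical sketch: iterative background reconstruction via SGD on an
# L1 loss, restricted to pixels the current estimate classifies as background.
import numpy as np

def reconstruct_background(frames, lr=0.05, threshold=30.0, epochs=5):
    bg = frames[0].astype(np.float64)
    for _ in range(epochs):
        for frame in frames:                       # one stochastic step per frame
            residual = frame.astype(np.float64) - bg
            is_bg = np.abs(residual) < threshold   # per-channel background test
            bg += lr * np.sign(residual) * is_bg   # subgradient of the L1 loss
    return np.clip(bg, 0, 255).astype(np.uint8)
```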


2019 ◽  
Vol 14 (1) ◽  
pp. 21-30
Author(s):  
A. Shyamala ◽  
S. Selvaperumal ◽  
G. Prabhakar

Background: Moving object detection in dynamic-environment video is more complex than in static-environment videos. In this paper, moving objects in video sequences are detected and segmented using a feature-extraction-based Adaptive Neuro-Fuzzy Inference System (ANFIS) classifier approach. The proposed moving object detection methodology is tested on different video sequences in both indoor and outdoor environments.

Methods: The proposed methodology consists of background subtraction and classification modules. The absolute difference image is constructed in the background subtraction module. Features are extracted from this difference image, and the extracted features are trained and classified using the ANFIS classification module.

Results: The proposed moving object detection methodology is analyzed in terms of Accuracy, Recall, Average Accuracy, Precision, and F-measure. The proposed moving object segmentation methodology is executed on different Central Processing Unit (CPU) processors, at 1.8 GHz and 2.4 GHz, to evaluate the performance during moving object segmentation. Some existing moving object detection systems use a 1.8 GHz CPU, while many recent systems use a 2.4 GHz CPU; hence, 1.8 GHz and 2.4 GHz processors are used in this paper for detecting the moving objects in video sequences. Table 1 shows the performance evaluation of the proposed moving object detection on the 1.8 GHz CPU (100 sequences), and Table 2 shows the corresponding evaluation on the 2.4 GHz CPU (100 sequences). The average moving object detection time on the 1.8 GHz CPU is 62.5 seconds for the fountain sequence, 64.7 seconds for the airport sequence, 71.6 seconds for the meeting room sequence, and 73.5 seconds for the lobby sequence, as depicted in Table 3; the average elapsed time over 100 sequences is 68.07 seconds. The average moving object detection time on the 2.4 GHz CPU is 56.5 seconds for the fountain sequence, 54.7 seconds for the airport sequence, 65.8 seconds for the meeting room sequence, and 67.5 seconds for the lobby sequence, as depicted in Table 4; the average elapsed time over 100 sequences is 61.12 seconds. It is clear from Table 3 and Table 4 that the moving object detection time decreases as the CPU frequency increases.

Conclusion: In this paper, moving objects are detected and segmented using an ANFIS classifier. The proposed method initially segments the background image, and features are then extracted from the thresholded image. These features are trained and classified using the ANFIS classification method. The proposed moving object detection method is tested on different video sequences obtained from different indoor and outdoor environments. The performance of the proposed moving object detection and segmentation methodology is analyzed in terms of Accuracy, Recall, Average Accuracy, Precision, and F-measure.
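
A hedged sketch of the described pipeline, with a scikit-learn MLP standing in for the ANFIS classifier (ANFIS is not available in common Python libraries): blocks of the absolute difference image yield simple statistical features that are then classified as moving or static. The feature choices, block size, and label convention are assumptions.

```python
# Hypothetical sketch (not the paper's code): block features from the absolute
# difference image are classified as moving/static; an MLP replaces ANFIS.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def block_features(diff, block=16):
    """Mean/std/max statistics for each non-overlapping block of the difference image."""
    h, w = diff.shape
    feats, coords = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = diff[y:y + block, x:x + block].astype(np.float64)
            feats.append([patch.mean(), patch.std(), patch.max()])
            coords.append((y, x))
    return np.asarray(feats), coords

def detect_moving_blocks(background_gray, frame_gray, clf, block=16):
    """Classify each block of the difference image with a trained classifier."""
    diff = cv2.absdiff(background_gray, frame_gray)
    feats, coords = block_features(diff, block)
    labels = clf.predict(feats)                  # 1 = moving, 0 = static (assumed)
    return [c for c, lab in zip(coords, labels) if lab == 1]

# Training would use difference images with annotated moving-object masks, e.g.:
# clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
# clf.fit(training_features, training_labels)
```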

