DYNAMIC BACKGROUND SUBTRACTION BASED ON LOCAL DEPENDENCY HISTOGRAM

Author(s):  
SHENGPING ZHANG ◽  
HONGXUN YAO ◽  
SHAOHUI LIU

Traditional background subtraction methods perform poorly when scenes contain dynamic backgrounds such as waving tree branches, spouting fountains, illumination changes, and camera jitter. In this paper, from the viewpoint of spatial context, we present a novel and effective dynamic background subtraction method with three contributions. First, we present a novel local dependency descriptor, called the local dependency histogram (LDH), to effectively model the spatial dependencies between a pixel and its neighboring pixels. These spatial dependencies carry substantial evidence for differentiating dynamic background regions from moving objects of interest. Second, based on the proposed LDH, we propose an effective approach to dynamic background subtraction in which each pixel is modeled as a group of weighted LDHs. A pixel is labeled as foreground or background by comparing the LDH computed in the current frame against its model LDHs, and the model LDHs are adaptively updated with the current LDH. Finally, unlike traditional approaches that use a fixed threshold to judge whether a pixel matches its model, an adaptive thresholding technique is also proposed. Experimental results on a diverse set of dynamic scenes validate that the proposed method significantly outperforms traditional methods for dynamic background subtraction.
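
A minimal sketch of the matching step this abstract describes, assuming a simple 8-neighbor dependency histogram and histogram intersection as the similarity measure; the descriptor variant, the fixed threshold (standing in for the paper's adaptive one), and names such as `ldh` and `classify_and_update` are illustrative, not the paper's exact formulation:

```python
import numpy as np

def ldh(gray, x, y, bins=8):
    """Illustrative local dependency histogram: a histogram of intensity
    differences between pixel (x, y) and its 8 neighbors."""
    c = int(gray[y, x])
    neigh = gray[y - 1:y + 2, x - 1:x + 2].astype(np.int16).ravel()
    diffs = np.delete(neigh, 4) - c                  # drop the center pixel
    hist, _ = np.histogram(diffs, bins=bins, range=(-255, 255))
    return hist / max(hist.sum(), 1)                 # normalized histogram

def classify_and_update(model_hists, weights, h, thresh=0.7, lr=0.05):
    """Match the current LDH h against K weighted model LDHs by histogram
    intersection, then update the best-matching model (MoG-style)."""
    sims = np.array([np.minimum(m, h).sum() for m in model_hists])
    best = int(np.argmax(sims))
    is_background = sims[best] > thresh
    weights *= (1 - lr)
    if is_background:
        weights[best] += lr                          # reinforce matched model
        model_hists[best] = (1 - lr) * model_hists[best] + lr * h
    weights /= weights.sum()
    return is_background
```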

Background subtraction is a key step in detecting moving objects from video in the field of computer vision: a reference frame is subtracted from every new frame of the video scene. A wide variety of background subtraction techniques is available in the literature for real-life applications such as crowd analysis, human activity tracking, traffic analysis, and many more. However, few benchmark datasets cover all the challenges these techniques face in object detection, namely dynamic backgrounds, illumination changes, shadow appearance, occlusion, and object speed. From this perspective, we provide an exhaustive literature survey of background subtraction techniques for video surveillance applications that address these challenges in real situations. Additionally, we survey eight benchmark video datasets, namely Wallflower, BMC, PETS, IBM, CAVIAR, CDnet, SABS, and RGB-D, along with their available ground truth. This study evaluates the performance of five background subtraction methods using parameters such as specificity, sensitivity, FNR, PWC, and F-score in order to identify an accurate and efficient method for detecting moving objects in less computational time.
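
All five reported measures follow from per-pixel confusion counts against the ground truth; a small sketch using the standard definitions (not tied to any single surveyed paper, and assuming both masks contain both classes so no denominator is zero):

```python
import numpy as np

def evaluate(pred, gt):
    """pred, gt: boolean foreground masks of equal shape."""
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    sensitivity = tp / (tp + fn)                     # recall
    specificity = tn / (tn + fp)
    fnr = fn / (fn + tp)                             # false negative rate
    pwc = 100.0 * (fp + fn) / (tp + tn + fp + fn)    # % wrong classifications
    precision = tp / (tp + fp)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return dict(sensitivity=sensitivity, specificity=specificity,
                fnr=fnr, pwc=pwc, f_score=f_score)
```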


2014 ◽  
Vol 556-562 ◽  
pp. 3549-3552
Author(s):  
Lian Fen Huang ◽  
Qing Yue Chen ◽  
Jin Feng Lin ◽  
He Zhi Lin

The key to background subtraction, which is widely used in moving object detection, is setting up and updating the background model. This paper presents a block-based background subtraction method built on ViBe that exploits the spatial correlation and temporal continuity of the video sequence. First, the background model of the video sequence is set up. Then, the background model is updated through block processing. Finally, the difference between the current frame and the background model is used to extract the moving objects.
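
The per-pixel core of ViBe that the block variant builds on can be sketched as follows: each pixel keeps N past samples, is classified as background when enough samples lie within a radius R of its current value, and matched pixels update one random sample stochastically. The block extension would apply the same test and update per block; the sketch below is per-pixel, with ViBe's commonly cited default constants:

```python
import numpy as np

N, R, MIN_MATCHES, SUBSAMPLE = 20, 20, 2, 16

def init_model(first_gray):
    """Fill N samples per pixel from the first frame plus small noise."""
    samples = np.repeat(first_gray[None].astype(np.int16), N, axis=0)
    samples += np.random.randint(-10, 11, samples.shape)
    return np.clip(samples, 0, 255)

def segment_and_update(samples, gray):
    dist = np.abs(samples - gray.astype(np.int16))   # (N, h, w)
    matches = (dist < R).sum(axis=0)
    background = matches >= MIN_MATCHES
    # stochastic in-place update of one random sample at background pixels
    update = background & (np.random.randint(0, SUBSAMPLE, gray.shape) == 0)
    idx = np.random.randint(0, N)
    samples[idx][update] = gray[update]
    return ~background                               # foreground mask
```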


2002 ◽  
Vol 02 (02) ◽  
pp. 163-178 ◽  
Author(s):  
YING REN ◽  
CHIN SENG CHUA ◽  
YEONG KHING HO

This paper proposes a new background subtraction method for detecting moving objects (foreground) from a time-varying background. While background subtraction has traditionally worked well for stationary backgrounds, with a non-stationary viewing sensor motion compensation can be applied but is difficult to realize to sufficient pixel accuracy in practice, and the traditional background subtraction algorithm fails. The problem is further compounded when the moving target to be detected or tracked is small, since the pixel error in motion-compensating the background will subsume the small target. A Spatial Distribution of Gaussians (SDG) model is proposed to handle moving object detection when motion compensation is only approximate. The distribution of each background pixel is modeled temporally and spatially. Based on this statistical model, a pixel in the current frame is classified as belonging to the foreground or the background. For the system to perform under lighting and environmental changes over an extended period of time, the background distribution must be updated with each incoming frame, so a new background restoration and adaptation algorithm is developed for the time-varying background. Test cases involving the detection of small moving objects within a highly textured background and a pan-tilt tracking system based on a 2D background mosaic are demonstrated successfully.
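
The central trick, tolerating residual motion-compensation error by testing a pixel against the Gaussians of its spatial neighborhood rather than only its own, might look like this in simplified form (the neighborhood radius, k-sigma test, and update rule are illustrative, not the paper's exact formulation):

```python
import numpy as np

def is_background_sdg(mu, var, gray, y, x, r=1, k=2.5):
    """Pixel (y, x) is background if it fits *any* Gaussian in its
    (2r+1)x(2r+1) spatial neighborhood: |I - mu| < k * sigma.
    mu, var: per-pixel background mean and variance maps."""
    y0, y1 = max(y - r, 0), y + r + 1
    x0, x1 = max(x - r, 0), x + r + 1
    diff = np.abs(float(gray[y, x]) - mu[y0:y1, x0:x1])
    return bool(np.any(diff < k * np.sqrt(var[y0:y1, x0:x1])))

def adapt(mu, var, gray, bg_mask, alpha=0.02):
    """Running update of the background distributions (the paper's
    restoration/adaptation scheme is richer; this is the simplest variant)."""
    g = gray.astype(mu.dtype)
    mu[bg_mask] = (1 - alpha) * mu[bg_mask] + alpha * g[bg_mask]
    var[bg_mask] = (1 - alpha) * var[bg_mask] + alpha * (g[bg_mask] - mu[bg_mask]) ** 2
    return mu, var
```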


Author(s):  
SUMIT KUMAR SINGH ◽  
MAGAN SINGH

Moving object segmentation has its own niche as an important topic in computer vision and has been avidly pursued by researchers. The background subtraction method is generally used for segmenting moving objects, but it may also classify shadows as part of the detected objects. Therefore, shadow detection and removal is an important step employed after moving object segmentation. However, these methods are adversely affected by changing environmental conditions and are vulnerable to sudden illumination changes and shadowing effects. In this work we therefore propose a faster, efficient, and adaptive background subtraction method, which periodically updates the background frame and gives better results, together with a shadow elimination method that removes shadows from the segmented objects with good discriminative power. Keywords: moving object segmentation,
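
A common concrete reading of the two steps is a periodically refreshed running-average background followed by an HSV ratio test that discards shadow pixels; a hedged sketch (the thresholds below are conventional values for this kind of test, not taken from this paper):

```python
import cv2
import numpy as np

def update_background(bg, frame, alpha=0.02):
    """Running-average background update (periodic refresh)."""
    return cv2.addWeighted(frame.astype(np.float32), alpha,
                           bg, 1.0 - alpha, 0)

def remove_shadows(frame, bg, fg_mask, beta=(0.4, 0.9), ts=40, th=40):
    """Classic HSV shadow test: a foreground pixel is shadow if its value
    drops by a bounded ratio while hue/saturation stay close
    (hue wraparound ignored for brevity)."""
    hsv_f = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV).astype(np.int16)
    hsv_b = cv2.cvtColor(bg.astype(np.uint8), cv2.COLOR_BGR2HSV).astype(np.int16)
    ratio = hsv_f[..., 2] / np.maximum(hsv_b[..., 2], 1)
    shadow = ((ratio > beta[0]) & (ratio < beta[1]) &
              (np.abs(hsv_f[..., 1] - hsv_b[..., 1]) < ts) &
              (np.abs(hsv_f[..., 0] - hsv_b[..., 0]) < th))
    return fg_mask & ~shadow
```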


Author(s):  
Mourad Moussa ◽  
Maha Hmila ◽  
Ali Douik

Background subtraction methods are widely exploited for moving object detection in videos in many computer vision applications, such as traffic monitoring, human motion capture, and video surveillance. The two most distinguishing and challenging aspects of such approaches are how to build the background model correctly and efficiently, and how to prevent false detections between (1) moving background pixels and moving objects and (2) shadow pixels and moving objects. In this paper we present a new method for image segmentation using background subtraction. We propose an effective scheme, based on statistical learning, for modelling and updating the background adaptively in dynamic scenes. We also introduce a method to detect sudden illumination changes and segment moving objects during these changes. Since the traditional color levels provided by an RGB sensor are not the best choice, we propose a recursive algorithm that selects a highly significant color space. Experimental results show significant improvements in moving object detection in dynamic scenes, such as waving tree leaves and sudden illumination changes, at a much lower computational cost than the Gaussian mixture model.
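
The sudden-illumination step can be made concrete in a simple way: when an implausibly large fraction of pixels is flagged as foreground in a single frame, treat it as a global illumination change and relearn the background quickly rather than reporting objects. The fraction threshold and fast learning rate below are illustrative assumptions, not values from this paper:

```python
import numpy as np

def handle_illumination_change(fg_mask, bg, frame,
                               max_fg_fraction=0.7, fast_alpha=0.5):
    """If too much of the frame looks 'foreground', assume a sudden
    global illumination change and re-adapt the background quickly."""
    if fg_mask.mean() > max_fg_fraction:
        bg = (1 - fast_alpha) * bg + fast_alpha * frame.astype(bg.dtype)
        fg_mask = np.zeros_like(fg_mask)         # suppress spurious detections
    return fg_mask, bg
```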


2021 ◽  
Vol 11 (2) ◽  
pp. 645
Author(s):  
Xujie Kang ◽  
Jing Li ◽  
Xiangtao Fan ◽  
Hongdeng Jian ◽  
Chen Xu

Visual simultaneous localization and mapping (SLAM) is challenging in dynamic environments because moving objects can impair camera pose tracking and mapping. This paper introduces a method for robust dense object-level SLAM in dynamic environments that takes a live stream of RGB-D frame data as input, detects moving objects, and segments the scene into different objects while simultaneously tracking and reconstructing their 3D structures. The approach provides a new method of dynamic object detection that integrates prior knowledge from a constructed object model database, object-oriented 3D tracking against the camera pose, and the association between the instance segmentation results on the current frame and the object database to find dynamic objects in the current frame. By leveraging the 3D static model for frame-to-model alignment, together with dynamic object culling, camera motion estimation reduces the overall drift. From the camera pose estimates and instance segmentation results, an object-level semantic map representation is constructed for the world map. Experimental results obtained on the TUM RGB-D dataset, comparing the proposed method with related state-of-the-art approaches, demonstrate that our method achieves similar performance in static scenes and improved accuracy and robustness in dynamic scenes.
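
The dynamic-object culling step can be pictured as masking out pixels of instances judged dynamic before frame-to-model alignment, so only static geometry constrains the pose estimate. A schematic sketch; `build_static_mask` and the surrounding names are hypothetical, not the authors' API:

```python
import numpy as np

def build_static_mask(depth, instance_masks, dynamic_ids):
    """Keep only pixels with valid depth that do not belong to any
    instance judged dynamic by the object-database association.
    instance_masks: {object_id: boolean mask}, dynamic_ids: set of ids."""
    static = depth > 0
    for obj_id, mask in instance_masks.items():
        if obj_id in dynamic_ids:
            static &= ~mask
    return static

# usage (schematic): feed the mask as per-pixel weights into the
# frame-to-model alignment so dynamic pixels contribute nothing:
#   pose = align_frame_to_model(depth, model, weights=build_static_mask(...))
```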


Author(s):  
Rekha V. ◽  
Natarajan K. ◽  
Innila Rose J.

Background subtraction of a foreground object in multimedia is one of the major preprocessing steps in many vision-based applications. The main logic for detecting moving objects in video is to take the difference between the current frame and a reference frame, called the "background image"; this is known as the frame differencing method. Background subtraction is widely used for real-time motion gesture recognition in gesture-enabled items such as vehicles or automated gadgets. It is also used in content-based video coding, traffic monitoring, object tracking, digital forensics, and human-computer interaction. Nowadays, most conferences, meetings, and interviews are conducted over video calls, and a conference-room-like atmosphere is obviously not always readily available. To address this, an efficient algorithm for foreground extraction in multimedia video calls is needed. This paper does not merely build a background subtraction application for the mobile platform; it optimizes the existing OpenCV algorithms to work with limited resources on a mobile platform without reducing performance. In this paper, various foreground detection, extraction, and feature detection algorithms are compared on a mobile platform using OpenCV. A set of experiments was conducted to appraise the efficiency of each algorithm over the others. The overall performance of these algorithms was compared on the basis of execution time, resolution, and resources required.
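
The kind of timing comparison the paper describes can be reproduced with OpenCV's stock subtractors; `createBackgroundSubtractorMOG2` and `createBackgroundSubtractorKNN` are real OpenCV APIs, while the harness and the input path are illustrative:

```python
import time
import cv2

subtractors = {
    "MOG2": cv2.createBackgroundSubtractorMOG2(detectShadows=True),
    "KNN": cv2.createBackgroundSubtractorKNN(detectShadows=True),
}

cap = cv2.VideoCapture("input.mp4")              # illustrative input path
timings = {name: 0.0 for name in subtractors}
frames = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    for name, sub in subtractors.items():
        t0 = time.perf_counter()
        _ = sub.apply(frame)                     # foreground mask
        timings[name] += time.perf_counter() - t0
cap.release()
for name, total in timings.items():
    print(f"{name}: {1000 * total / max(frames, 1):.2f} ms/frame")
```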


Algorithms ◽  
2019 ◽  
Vol 12 (7) ◽  
pp. 128 ◽  
Author(s):  
Tianming Yu ◽  
Jianhua Yang ◽  
Wei Lu

Advancing background subtraction methods in dynamic scenes is an ongoing and timely goal for many researchers. Recently, background subtraction methods have been developed with deep convolutional features, which have improved their performance. However, most of these deep methods are supervised, applicable only to a certain scene, and have high computational cost. In contrast, traditional background subtraction methods have low computational cost and can be applied to general scenes. Therefore, in this paper, we propose an unsupervised and concise method based on features learned by a deep convolutional neural network to refine traditional background subtraction methods. In the proposed method, the low-level features of an input image are extracted from the lower layers of a pretrained convolutional neural network, and the main features are retained to further establish the dynamic background model. The evaluation of the experiments on dynamic scenes demonstrates that the proposed method significantly improves the performance of traditional background subtraction methods.
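
The described pipeline can be approximated as follows: use an early convolutional block of a pretrained network as a fixed feature extractor, retain the strongest channels as the "main features", and maintain a running background model in that feature space. A hedged PyTorch sketch; the layer choice, channel-selection rule, and threshold are assumptions rather than the paper's exact design:

```python
import torch
import torchvision.models as models

# first conv block of a pretrained VGG16 as a fixed low-level feature extractor
vgg = models.vgg16(weights="IMAGENET1K_V1").features[:4].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

@torch.no_grad()
def select_channels(first_frame, keep=16):
    """Choose the `keep` channels with the highest mean activation on the
    first frame; these 'main features' are reused for every later frame."""
    f = vgg(first_frame)                         # (1, 64, H, W)
    return f.mean(dim=(0, 2, 3)).topk(keep).indices

@torch.no_grad()
def features(frame, idx):
    return vgg(frame)[:, idx]

def update_and_segment(bg_feat, cur_feat, alpha=0.05, thresh=0.5):
    """Running-average background model in feature space; foreground
    where the mean per-pixel feature distance exceeds a threshold."""
    dist = (cur_feat - bg_feat).abs().mean(dim=1, keepdim=True)
    fg = dist > thresh
    bg_feat = (1 - alpha) * bg_feat + alpha * cur_feat
    return fg, bg_feat
```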


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8374
Author(s):  
Yupei Zhang ◽  
Kwok-Leung Chan

Detecting saliency in videos is a fundamental step in many computer vision systems. Saliency refers to the significant target(s) in the video; the object of interest is further analyzed in high-level applications. Saliency can be segregated from the background when the two exhibit different visual cues, so saliency detection is often formulated as background subtraction. However, saliency detection is challenging: a dynamic background can produce false positive errors, while camouflage produces false negative errors. With moving cameras, the captured scenes are even more complicated to handle. We propose a new framework, called saliency detection via background model completion (SD-BMC), that comprises a background modeler and a deep learning background/foreground segmentation network. The background modeler generates an initial clean background image from a short image sequence. Based on the idea of video completion, a good background frame can be synthesized even with the co-existence of a changing background and moving objects. We adopt a background/foreground segmenter that, although pre-trained with a specific video dataset, can also detect saliency in unseen videos. The background modeler can adjust the background image dynamically when the segmenter's output deteriorates while processing a long video. To the best of our knowledge, our framework is the first to adopt video completion for background modeling and saliency detection in videos captured by moving cameras. The F-measure results obtained on the pan-tilt-zoom (PTZ) videos show that our proposed framework outperforms some deep learning-based background subtraction models by 11% or more. With more challenging videos, our framework also outperforms many high-ranking background subtraction methods by more than 3%.
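
The background-modeler half of such a pipeline is often bootstrapped with a temporal median over a short clip, which already yields a clean background frame when objects keep moving. A simplified sketch of that initialization; SD-BMC's actual modeler uses video completion, which this stand-in does not implement:

```python
import numpy as np
import cv2

def initial_background(video_path, n_frames=60):
    """Temporal-median background bootstrap from a short sequence
    (a stand-in for SD-BMC's video-completion modeler)."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while len(frames) < n_frames:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return np.median(np.stack(frames), axis=0).astype(np.uint8)
```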

