Segmentation of Moving Objects using Numerous Background Subtraction Methods for Surveillance Applications

Background subtraction is a key step in detecting moving objects in video within the field of computer vision. It works by subtracting a reference frame from every new frame of a video scene. A wide variety of background subtraction techniques is available in the literature for real-life applications such as crowd analysis, human activity tracking, traffic analysis and many more. However, there have not been enough benchmark datasets available that cover all the challenges these subtraction techniques face in object detection; challenges remain in terms of dynamic background, illumination changes, shadow appearance, occlusion and object speed. From this perspective, we provide an exhaustive literature survey of background subtraction techniques for video surveillance applications that address these challenges in real situations. Additionally, we survey eight benchmark video datasets, namely Wallflower, BMC, PET, IBM, CAVIAR, CD.Net, SABS and RGB-D, along with their available ground truth. This study evaluates the performance of five background subtraction methods using performance parameters such as specificity, sensitivity, FNR, PWC and F-score in order to identify an accurate and efficient method for detecting moving objects in less computational time.
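As a quick illustration of the performance parameters listed above, the following sketch computes specificity, sensitivity, FNR, PWC and F-score by comparing a binary detection mask against its ground truth (the function name and toy masks are ours; the metric definitions follow the usual change-detection conventions):

```python
import numpy as np

def evaluate_mask(gt, pred):
    """Compare a binary foreground mask against ground truth and return
    the survey's metrics: specificity, sensitivity, FNR, PWC, F-score."""
    gt = gt.astype(bool)
    pred = pred.astype(bool)
    tp = np.sum(gt & pred)        # foreground pixels correctly detected
    tn = np.sum(~gt & ~pred)      # background pixels correctly rejected
    fp = np.sum(~gt & pred)       # false alarms
    fn = np.sum(gt & ~pred)       # misses
    sensitivity = tp / (tp + fn)              # a.k.a. recall
    specificity = tn / (tn + fp)
    fnr = fn / (fn + tp)                      # false negative rate
    pwc = 100.0 * (fp + fn) / (tp + tn + fp + fn)  # % wrong classifications
    precision = tp / (tp + fp)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return dict(specificity=specificity, sensitivity=sensitivity,
                fnr=fnr, pwc=pwc, f_score=f_score)

# toy example: 4x4 ground truth vs. a detection with one miss, one false alarm
gt = np.array([[1, 1, 0, 0],
               [1, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]])
m = evaluate_mask(gt, pred)
```

On the toy masks, 3 of 4 foreground pixels are recovered with one false alarm, so sensitivity is 0.75 and PWC is 12.5%.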

Author(s): SHENGPING ZHANG, HONGXUN YAO, SHAOHUI LIU

Traditional background subtraction methods perform poorly when scenes contain dynamic backgrounds such as waving tree branches, spouting fountains, illumination changes, camera jitter, etc. In this paper, from the viewpoint of spatial context, we present a novel and effective dynamic background subtraction method with three contributions. First, we present a novel local dependency descriptor, called the local dependency histogram (LDH), to effectively model the spatial dependencies between a pixel and its neighboring pixels. These spatial dependencies contain substantial evidence for differentiating dynamic background regions from moving objects of interest. Second, based on the proposed LDH, an effective approach to dynamic background subtraction is proposed, in which each pixel is modeled as a group of weighted LDHs. A pixel is labeled as foreground or background by comparing the LDH computed in the current frame against its model LDHs, and the model LDHs are adaptively updated by the current LDH. Finally, unlike traditional approaches that use a fixed threshold to judge whether a pixel matches its model, an adaptive thresholding technique is also proposed. Experimental results on a diverse set of dynamic scenes validate that the proposed method significantly outperforms traditional methods for dynamic background subtraction.
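The per-pixel model of weighted histograms can be sketched as follows. This is a simplified stand-in, not the authors' method: it uses a generic normalised histogram with histogram intersection as the similarity measure and a fixed match threshold, whereas the paper uses the LDH descriptor and adaptive thresholding. The class name, threshold and learning rate are our assumptions:

```python
import numpy as np

class PixelHistModel:
    """Toy pixel model: a group of K weighted, L1-normalised histograms,
    matched by histogram intersection (simplified sketch of the idea)."""

    def __init__(self, k=3, alpha=0.05, thresh=0.7):
        self.k = k              # max histograms kept per pixel
        self.alpha = alpha      # learning rate for weights/histograms
        self.thresh = thresh    # fixed match threshold (paper: adaptive)
        self.hists = []
        self.weights = []

    @staticmethod
    def intersection(h1, h2):
        # both histograms sum to 1, so the score lies in [0, 1]
        return np.minimum(h1, h2).sum()

    def classify_and_update(self, hist):
        """Label the current histogram and adapt the model."""
        scores = [self.intersection(hist, h) for h in self.hists]
        if scores and max(scores) >= self.thresh:
            i = int(np.argmax(scores))
            # blend the matched model histogram toward the current one
            self.hists[i] = (1 - self.alpha) * self.hists[i] + self.alpha * hist
            self.weights[i] += self.alpha
            return "background"
        # no match: add a new histogram, or replace the lowest-weight one
        if len(self.hists) < self.k:
            self.hists.append(hist.copy())
            self.weights.append(self.alpha)
        else:
            j = int(np.argmin(self.weights))
            self.hists[j] = hist.copy()
            self.weights[j] = self.alpha
        return "foreground"

model = PixelHistModel()
h_bg = np.array([0.5, 0.5, 0.0, 0.0])
label0 = model.classify_and_update(h_bg)   # first sight: no model yet
label1 = model.classify_and_update(h_bg)   # now matches the stored model
```

A repeated histogram is absorbed into the background model, while a dissimilar one is flagged as foreground.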


2002, Vol 02 (02), pp. 163-178
Author(s): YING REN, CHIN SENG CHUA, YEONG KHING HO

This paper proposes a new background subtraction method for detecting moving objects (foreground) against a time-varying background. While background subtraction has traditionally worked well for stationary backgrounds, for a non-stationary viewing sensor motion compensation can be applied, but it is difficult to realize to sufficient pixel accuracy in practice, and the traditional background subtraction algorithm fails. The problem is further compounded when the moving target to be detected/tracked is small, since the pixel error in motion-compensating the background will subsume the small target. A Spatial Distribution of Gaussians (SDG) model is proposed to deal with moving object detection when motion compensation has only been approximately carried out. The distribution of each background pixel is modeled both temporally and spatially. Based on this statistical model, each pixel in the current frame is classified as belonging to either the foreground or the background. For the system to perform under lighting and environmental changes over an extended period of time, the background distribution must be updated with each incoming frame, so a new background restoration and adaptation algorithm is developed for the time-varying background. Test cases involving the detection of small moving objects within a highly textured background and a pan-tilt tracking system based on a 2D background mosaic are demonstrated successfully.
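The core classification step can be sketched as follows, under simplifying assumptions of ours: one temporal Gaussian per background pixel, and a spatial neighbourhood check so that a pixel counts as background if it matches the Gaussian of any nearby pixel, which absorbs small residual motion-compensation error (this is an illustration of the spatially distributed matching idea, not the paper's exact SDG model):

```python
import numpy as np

def classify_frame(frame, mean, var, k=2.5, radius=1):
    """Label each pixel foreground (True) or background (False).

    A pixel is background if its value lies within k standard deviations
    of the background Gaussian of ANY pixel in its (2*radius+1)^2
    neighbourhood. `mean`/`var` hold per-pixel background statistics.
    """
    fg = np.ones(frame.shape, dtype=bool)
    std = np.sqrt(var)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # shift the background statistics by (dy, dx) and test the match
            m = np.roll(np.roll(mean, dy, axis=0), dx, axis=1)
            s = np.roll(np.roll(std, dy, axis=0), dx, axis=1)
            fg &= np.abs(frame - m) > k * s
    return fg

# illustrative uniform background statistics with one bright intruding pixel
mean = np.full((5, 5), 100.0)
var = np.full((5, 5), 4.0)
frame = mean.copy()
frame[2, 2] = 160.0
fg = classify_frame(frame, mean, var)
```

Only the pixel that fails the match against every neighbouring Gaussian survives as foreground.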


Author(s): SUMIT KUMAR SINGH, MAGAN SINGH

Moving object segmentation has its own niche as an important topic in computer vision and is avidly pursued by researchers. The background subtraction method is generally used for segmenting moving objects, but it may also classify shadows as part of the detected moving objects. Therefore, shadow detection and removal is an important step employed after moving object segmentation. However, these methods are adversely affected by changing environmental conditions; they are vulnerable to sudden illumination changes and shadowing effects. In this work we therefore propose a fast, efficient and adaptive background subtraction method, which periodically updates the background frame and gives better results, together with a shadow elimination method that removes shadows from the segmented objects with good discriminative power.
Keywords: Moving object segmentation,
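The combination of an adaptive (running-average) background update with a simple shadow test can be sketched as follows. This is a generic illustration, not the paper's algorithm; the thresholds, the intensity-ratio shadow criterion and the function name are our assumptions:

```python
import numpy as np

def segment_and_update(frame, background, alpha=0.02, diff_thresh=25,
                       shadow_lo=0.5, shadow_hi=0.9):
    """One step of adaptive background subtraction with shadow removal.

    A pixel is foreground if it differs enough from the background; a
    foreground pixel whose intensity is a scaled-down version of the
    background (ratio in [shadow_lo, shadow_hi]) is re-labelled shadow.
    The background is updated only where the pixel was labelled background.
    """
    frame = frame.astype(float)
    diff = np.abs(frame - background)
    fg = diff > diff_thresh
    ratio = frame / np.maximum(background, 1.0)
    shadow = fg & (ratio >= shadow_lo) & (ratio <= shadow_hi)
    fg &= ~shadow
    # running-average update, frozen at foreground and shadow pixels
    background = np.where(fg | shadow, background,
                          (1 - alpha) * background + alpha * frame)
    return fg, shadow, background

bg0 = np.full((4, 4), 100.0)
frame = bg0.copy()
frame[0, 0] = 200.0   # a moving object (much brighter than the background)
frame[1, 1] = 70.0    # a cast shadow (darker, but a similar ratio)
fg, shadow, bg1 = segment_and_update(frame, bg0)
```

The brighter pixel is kept as foreground, while the darkened pixel is recognised as shadow and excluded from the segmented object.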


Robotica, 1998, Vol 16 (1), pp. 109-116
Author(s): Han Wang, Choon Seng Chua, Ching Tong Sim

This paper reports a visual tracking system that can track moving objects in real time on a modest workstation equipped with a pan-tilt device. The algorithm has three essential parts: (1) feature detection, (2) tracking and (3) control of the robot head. Corners are viewpoint invariant and are hence utilised as the beacon for tracking. Tracking is performed in two stages of Kalman filtering and affine transformation. A technique for greatly reducing the computational time of the correlation is also described: the Kalman filter intelligently predicts the fovea window, reducing computation dramatically, while the affine transformation deals with unexpected events such as partial occlusion.
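The prediction step that centres the fovea (search) window can be sketched with a constant-velocity Kalman filter. This is an illustration of the idea rather than the paper's implementation; the noise covariances and the constant-velocity motion model are our assumptions:

```python
import numpy as np

# Constant-velocity Kalman filter predicting where to centre the fovea
# window in the next frame. State: [x, y, vx, vy].
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)   # process noise (assumed)
R = 1.0 * np.eye(2)    # measurement noise (assumed)

x = np.zeros(4)        # state estimate
P = np.eye(4)          # state covariance

def kalman_step(x, P, z):
    """One predict/update cycle; also returns the predicted measurement,
    i.e. where to centre the fovea window in the next frame."""
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update with the measured corner position z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new, H @ (F @ x_new)

# track a corner moving at (2, 1) pixels per frame
for t in range(20):
    z = np.array([2.0 * t, 1.0 * t])
    x, P, centre = kalman_step(x, P, z)
```

Because the correlation search only needs to cover the small window around the predicted centre, the per-frame matching cost drops sharply.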


Sensors, 2021, Vol 21 (24), pp. 8374
Author(s): Yupei Zhang, Kwok-Leung Chan

Detecting saliency in videos is a fundamental step in many computer vision systems. Saliency here means the significant target(s) in the video; the object of interest is further analyzed for high-level applications. Saliency and the background can be segregated if they exhibit different visual cues, so saliency detection is often formulated as background subtraction. However, saliency detection is challenging. For instance, a dynamic background can result in false positive errors, while camouflage will result in false negative errors. With moving cameras, the captured scenes are even more complicated to handle. We propose a new framework, called saliency detection via background model completion (SD-BMC), that comprises a background modeler and a deep learning background/foreground segmentation network. The background modeler generates an initial clean background image from a short image sequence; based on the idea of video completion, a good background frame can be synthesized despite the co-existence of a changing background and moving objects. We adopt a background/foreground segmenter that, although pre-trained on a specific video dataset, can also detect saliency in unseen videos. The background modeler can adjust the background image dynamically when the segmenter's output deteriorates while processing a long video. To the best of our knowledge, our framework is the first to adopt video completion for background modeling and saliency detection in videos captured by moving cameras. The F-measure results, obtained from pan-tilt-zoom (PTZ) videos, show that our proposed framework outperforms some deep learning-based background subtraction models by 11% or more. On more challenging videos, our framework also outperforms many high-ranking background subtraction methods by more than 3%.


2018, Vol 2 (1)
Author(s): Fatima Ameen, Ziad Mohammed, Abdulrahman Siddiq

Tracking systems for moving objects provide a useful means to better control, manage and secure them. Such systems are used at different scales of application, indoors and outdoors, and are even used to track vehicles, ships and airplanes moving across the globe. This paper presents the design and implementation of a system for tracking objects moving over a wide geographical area. The system depends on the Global Positioning System (GPS) and Global System for Mobile Communications (GSM) technologies without requiring Internet service. The implemented system uses the freely available GPS service to determine the positions of the moving objects. Tests of the implemented system in different regions and conditions show that the maximum uncertainty in the obtained positions is a circle with a radius of about 16 m, which is an acceptable result for tracking the movement of objects in wide, open environments.
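A GPS receiver typically reports its fix as NMEA 0183 sentences, so the position-extraction step of such a tracker can be sketched as a small parser for the GGA sentence (a minimal sketch with no checksum validation; the sentence shown uses illustrative values):

```python
def parse_gga(sentence):
    """Extract decimal-degree latitude/longitude from an NMEA GGA sentence.

    NMEA encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm,
    each followed by a hemisphere letter (N/S or E/W).
    """
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")

    def to_degrees(value, hemisphere, deg_digits):
        degrees = float(value[:deg_digits]) + float(value[deg_digits:]) / 60.0
        return -degrees if hemisphere in ("S", "W") else degrees

    lat = to_degrees(fields[2], fields[3], 2)   # ddmm.mmmm
    lon = to_degrees(fields[4], fields[5], 3)   # dddmm.mmmm
    return lat, lon

# example GGA sentence (illustrative fix)
lat, lon = parse_gga(
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
```

The decoded coordinates would then be forwarded to the monitoring side over GSM (e.g. as an SMS payload).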


Symmetry, 2021, Vol 13 (4), pp. 645
Author(s): Muhammad Farooq, Sehrish Sarfraz, Christophe Chesneau, Mahmood Ul Hassan, Muhammad Ali Raza, ...

Expectiles have gained considerable attention in recent years due to their wide applications in many areas. In this study, the k-nearest neighbours approach, together with the asymmetric least squares loss function, called ex-kNN, is proposed for computing expectiles. Firstly, the effect of various distance measures on ex-kNN is evaluated in terms of test error and computational time. It is found that the Canberra, Lorentzian and Soergel distance measures lead to minimum test error, whereas Euclidean, Canberra and Average of (L1, L∞) lead to a low computational cost. Secondly, the performance of ex-kNN is compared with the existing packages er-boost and ex-svm for computing expectiles on nine real-life examples. Depending on the nature of the data, ex-kNN showed two to ten times better performance than er-boost and comparable performance with ex-svm regarding test error. Computationally, ex-kNN is found to be two to five times faster than ex-svm and much faster than er-boost, particularly in the case of high-dimensional data.
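The combination of k-nearest neighbours with the asymmetric least squares loss can be sketched as follows. This is our minimal reading of the idea, not the package's implementation: the tau-expectile of a sample is computed by iterated asymmetrically weighted averaging, and a prediction is the expectile of the responses of the k nearest training points (Euclidean distance here, one of the several metrics the study compares):

```python
import numpy as np

def expectile(y, tau, tol=1e-8, max_iter=100):
    """Tau-expectile of a sample via iterated asymmetric least squares:
    m = sum(w_i * y_i) / sum(w_i), with w_i = tau if y_i > m else 1 - tau."""
    m = float(np.mean(y))
    for _ in range(max_iter):
        w = np.where(y > m, tau, 1 - tau)
        m_new = float(np.sum(w * y) / np.sum(w))
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

def ex_knn_predict(X_train, y_train, x, tau, k=5):
    """Sketch of ex-kNN: the tau-expectile of the responses of the
    k nearest training points to the query x."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    return expectile(y_train[idx], tau)

X_train = np.array([[0.0], [1.0], [2.0], [10.0]])
y_train = np.array([0.0, 1.0, 2.0, 10.0])
pred = ex_knn_predict(X_train, y_train, np.array([0.5]), tau=0.5, k=3)
```

For tau = 0.5 the expectile reduces to the mean, so the toy prediction is the average response of the three nearest points.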


Drones, 2021, Vol 5 (2), pp. 37
Author(s): Bingsheng Wei, Martin Barczyk

We consider the problem of vision-based detection and ranging of a target UAV using the video feed from a monocular camera onboard a pursuer UAV. Our previously published work in this area employed a cascade classifier algorithm to locate the target UAV, which was found to perform poorly in complex background scenes. We thus study the replacement of the cascade classifier with newer machine learning-based object detection algorithms. Five candidate algorithms are implemented and quantitatively tested in terms of their efficiency (measured as frames-per-second processing rate), accuracy (measured as the root mean squared error between ground truth and detected location), and consistency (measured as mean average precision) across a variety of flight patterns, backgrounds, and test conditions. Assigning relative weights of 20%, 40% and 40% to these three criteria, we find that when flying over a white background, the top three performers are YOLO v2 (76.73 out of 100), Faster RCNN v2 (63.65 out of 100), and Tiny YOLO (59.50 out of 100), while over a realistic background, the top three performers are Faster RCNN v2 (54.35 out of 100), SSD MobileNet v1 (51.68 out of 100) and SSD Inception v2 (50.72 out of 100), leading us to select Faster RCNN v2 as our recommended solution. We then provide a roadmap for further work on integrating the object detector into our vision-based UAV tracking system.
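The 20/40/40 weighting scheme above amounts to a simple weighted sum of the per-criterion scores; a sketch (the weights come from the study, but the per-criterion 0-100 scaling and the example scores below are our assumptions):

```python
def weighted_score(efficiency, accuracy, consistency,
                   weights=(0.2, 0.4, 0.4)):
    """Combine per-criterion scores (each scaled to 0-100) into the
    overall mark used to rank the detectors."""
    w_eff, w_acc, w_con = weights
    return w_eff * efficiency + w_acc * accuracy + w_con * consistency

# e.g. a detector scoring 90 on efficiency, 70 on accuracy, 75 on consistency
overall = weighted_score(90, 70, 75)
```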

