Dynamic background modeling using deep learning autoencoder network

2019 ◽  
Vol 79 (7-8) ◽  
pp. 4639-4659 ◽  
Author(s):  
Jeffin Gracewell ◽  
Mala John

Sensors ◽  
2019 ◽  
Vol 19 (12) ◽  
pp. 2672
Author(s):  
Wenhui Li ◽  
Jianqi Zhang ◽  
Ying Wang

The pixel-based adaptive segmenter (PBAS) is a classic background modeling algorithm for change detection. However, PBAS has difficulty detecting foreground targets in dynamic background regions. To address this problem, this paper proposes a weighted pixel-based adaptive segmenter for change detection, named WePBAS, built on PBAS. Whereas the samples in the PBAS background model are unweighted, WePBAS uses weighted background samples as its background model. In the weighted sample set, low-weight background samples typically represent wrong background pixels and need to be replaced; conversely, high-weight background samples need to be preserved. Following this principle, a directional background model update mechanism is proposed to improve segmentation of foreground targets in dynamic background regions. In addition, owing to its “background diffusion” mechanism, PBAS often misidentifies small, intermittently moving foreground targets as background. To counter this, an adaptive foreground counter that limits “background diffusion” is added to WePBAS; the counter automatically adjusts its own parameters to each video’s characteristics. Experiments show that the proposed method is competitive with state-of-the-art background modeling methods for change detection.
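
The weighted-sample mechanism lends itself to a short illustration. The following is a minimal per-pixel sketch, not the authors' WePBAS implementation: the class name, sample count, decision radius, match threshold, and weight increments are illustrative assumptions, and the per-pixel adaptation of decision radius and update rate that PBAS performs is omitted.

```python
import numpy as np

class WeightedPixelModel:
    """Minimal weighted-sample background model for a single pixel.

    A sketch only: the sample count, radius, match threshold, and weight
    scheme are placeholder choices, not the published WePBAS parameters.
    """

    def __init__(self, n_samples=20, radius=20.0, min_matches=2):
        self.radius = radius                # decision distance for a sample match
        self.min_matches = min_matches      # matches needed to call "background"
        self.samples = np.zeros(n_samples)  # seed from the first frames in practice
        self.weights = np.ones(n_samples)   # one weight per background sample

    def is_background(self, value):
        """Standard sample-consensus test against the stored samples."""
        matches = np.abs(self.samples - value) < self.radius
        return int(matches.sum()) >= self.min_matches

    def update(self, value):
        """Directional update: matching samples gain weight and are kept;
        the lowest-weight sample (the likeliest wrong background sample)
        is the one replaced by the current value."""
        matches = np.abs(self.samples - value) < self.radius
        self.weights[matches] += 1.0
        victim = int(np.argmin(self.weights))
        self.samples[victim] = value
        self.weights[victim] = 1.0
```

As in PBAS, the update would normally be applied conservatively (only for pixels classified as background, and only with some probability), which this sketch leaves to the caller.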


2019 ◽  
Author(s):  
Hayder Yousif

[ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI AT REQUEST OF AUTHOR.] Camera traps are a popular tool to sample animal populations because they are noninvasive, detect a variety of species, and can record many thousands of animal detections per deployment. Cameras are typically set to take bursts of multiple images for each detection and are deployed in arrays of dozens or hundreds of sites, often resulting in millions of images per study. The task of converting images to animal detection records from such large image collections is daunting, and is made worse by situations that generate copious empty pictures from false triggers (e.g., camera malfunction or moving vegetation) or pictures of humans. We offer the first widely available computer vision tool for processing camera trap images. Our results show that the tool is accurate and yields substantial time savings when processing large image datasets, thus improving our ability to monitor wildlife across large scales with camera traps.

In this dissertation, we have developed new image/video processing and computer vision algorithms for efficient and accurate object detection and sequence-level classification from natural-scene camera-trap images. This work addresses the following five major tasks:

(1) Human-animal detection. We develop a fast and accurate scheme for human-animal detection from highly cluttered camera-trap images using joint background modeling and deep learning classification. Specifically, we first develop an effective background modeling and subtraction scheme to generate region proposals for the foreground objects. We then develop a cross-frame image patch verification to reduce the number of foreground object proposals. Finally, we perform a complexity-accuracy analysis of deep convolutional neural networks (DCNNs) to develop a fast deep learning classification scheme that sorts these region proposals into three categories: human, animal, and background patches. The optimized DCNN maintains a high level of accuracy while reducing computational complexity by a factor of 14. Our experimental results demonstrate that the proposed method outperforms existing methods on the camera-trap dataset.

(2) Object segmentation from natural scenes. We first design and train a fast DCNN for animal-human-background object classification, which is used to analyze the input image and generate multilayer feature maps representing the responses of different image regions to the animal-human-background classifier. From these feature maps, we construct the so-called deep objectness graph for accurate animal-human object segmentation with graph cut. The segmented object regions from each image in the sequence are then verified and fused in the temporal domain using background modeling. Our experimental results demonstrate that the proposed method outperforms existing state-of-the-art methods on the camera-trap dataset with highly cluttered natural scenes.

(3) DCNN-domain background modeling. We replace the background model with a new, more efficient deep-learning-based model. The input frames are segmented into regions through the deep objectness graph, and the region boundaries of the input frames are then multiplied with each other to obtain the patches in the regions of movement. We construct the background representation using the temporal information of the co-located patches. We propose to fuse the subtraction and foreground/background pixel classification of two representations: (a) chromaticity and (b) deep pixel information.
(4) Sequence-level object classification. We propose a new method for sequence-level video recognition, with application to animal species recognition from camera-trap images. First, using background modeling and cross-frame patch verification, we develop a scheme to generate candidate object regions, or object proposals, in the spatiotemporal domain. Second, we develop a dynamic programming optimization approach to identify the best temporal subset of object proposals. Third, we aggregate and fuse the features of these selected object proposals for efficient sequence-level animal species classification.
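
The dynamic programming step in task (4) can be sketched compactly. The routine below is a Viterbi-style simplification that selects one proposal per frame so as to maximize summed objectness plus temporal-consistency scores; the function name and the `scores`/`link` inputs are assumptions for illustration, and the dissertation's "best temporal subset" selection may differ (e.g., by allowing frames to be skipped).

```python
import numpy as np

def select_proposals(scores, link):
    """Pick one object proposal per frame via dynamic programming.

    scores: list over frames; scores[t][i] is the objectness score of
            proposal i in frame t (assumed, e.g., from a DCNN classifier).
    link:   link[t][i][j] is a pairwise consistency score (e.g., spatial
            overlap) between proposal i in frame t-1 and proposal j in frame t.
    Returns the index of the chosen proposal in each frame.
    """
    T = len(scores)
    best = [np.asarray(scores[0], dtype=float)]  # best path score ending at each proposal
    back = []                                    # backpointers per transition
    for t in range(1, T):
        # total[i, j]: best path ending at proposal i in frame t-1, extended to j in frame t
        total = best[-1][:, None] + np.asarray(link[t]) + np.asarray(scores[t])[None, :]
        back.append(total.argmax(axis=0))
        best.append(total.max(axis=0))
    # Backtrack from the highest-scoring final proposal.
    path = [int(best[-1].argmax())]
    for t in range(T - 2, -1, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```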


2017 ◽  
Vol 60 (11) ◽  
pp. 2287-2302
Author(s):  
LiZhong Peng ◽  
Fan Zhang ◽  
BingYin Zhou

2018 ◽  
Vol 2018 ◽  
pp. 1-11
Author(s):  
Tianming Yu ◽  
Jianhua Yang ◽  
Wei Lu

Background modeling plays an important role in intelligent video surveillance. Researchers have presented diverse approaches to dynamic background modeling. However, in the case of pumping-unit surveillance, traditional background modeling methods often mistakenly detect the periodically rotating pumping unit as a foreground object. To address this problem, we propose a novel background modeling method for foreground segmentation, particularly in dynamic scenes that include a rotating pumping unit. In the proposed method, the ViBe method is employed to extract possible foreground pixels from the sequence frames and segment the video image into dynamic and static regions. Subsequently, the kernel density estimation (KDE) method is used to build a background model from the dynamic samples of each pixel. The bandwidth and threshold of the KDE model are calculated from the sample distribution and extrema of each dynamic pixel. In addition, the sample adjustment strategy combines regular and real-time updates. The performance of the proposed method is evaluated against several state-of-the-art methods on complex dynamic scenes containing a rotating pumping unit. Experimental results show that the proposed method is well suited to monitoring applications that involve periodic object motion.
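
The KDE step can be illustrated with a small sketch. The snippet below estimates a pixel's background likelihood with a Gaussian kernel over its dynamic samples; the paper derives the bandwidth and threshold from each pixel's sample distribution and extrema, whereas this sketch falls back to a common rule of thumb (median absolute successive difference), so the function name and the fallback are assumptions.

```python
import numpy as np

SQRT_2PI = np.sqrt(2.0 * np.pi)

def kde_background_prob(samples, value, bandwidth=None):
    """Gaussian-kernel density estimate of P(value | background samples).

    A minimal per-pixel sketch; `samples` holds this pixel's recent
    dynamic background values (intensities).
    """
    samples = np.asarray(samples, dtype=float)
    if bandwidth is None:
        # Rule-of-thumb fallback: median absolute difference of sorted samples.
        diffs = np.abs(np.diff(np.sort(samples)))
        bandwidth = max(float(np.median(diffs)), 1.0)  # avoid a degenerate zero bandwidth
    kernels = np.exp(-0.5 * ((value - samples) / bandwidth) ** 2)
    return kernels.mean() / (bandwidth * SQRT_2PI)
```

A pixel would then be labeled foreground whenever its estimated density falls below the per-pixel threshold.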


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Yingying Yue ◽  
Dan Xu ◽  
Zhiming Qian ◽  
Hongzhen Shi ◽  
Hao Zhang

Foreground target detection algorithms (FTDAs) are a fundamental preprocessing step in computer vision and video processing. ViBe, a universal background subtraction algorithm for video sequences, is a fast, simple, and efficient background-modeling FTDA with optimal sample attenuation. However, traditional ViBe has three limitations: (1) noise under dynamic backgrounds, (2) ghosts, and (3) target adhesion. To solve these three problems, this paper introduces ant colony clustering and proposes Ant_ViBe, which improves the background modeling mechanism of traditional ViBe in terms of initial sample modeling, the pheromone and ant colony update mechanism, and the foreground segmentation criterion. Experimental results show that Ant_ViBe greatly improves noise resistance under dynamic backgrounds, eases the ghost and target adhesion problems, and surpasses typical algorithms and their fusion variants on most evaluation indexes.
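
For reference, the ViBe core that Ant_ViBe builds on reduces to a sample-consensus test plus a stochastic conservative update, sketched below with ViBe's customary default parameters. The ant colony clustering, pheromone update, and modified segmentation criterion that constitute Ant_ViBe's contribution are not reproduced here.

```python
import random
import numpy as np

def vibe_is_background(samples, value, radius=20, min_matches=2):
    """Core ViBe test: background if at least `min_matches` stored samples
    lie within `radius` of the current pixel value."""
    samples = np.asarray(samples, dtype=float)
    return int(np.count_nonzero(np.abs(samples - value) < radius)) >= min_matches

def vibe_update(samples, value, subsampling=16):
    """Conservative stochastic update: with probability 1/subsampling,
    overwrite a randomly chosen sample with the current background value.
    `samples` is a mutable list for this pixel; ViBe also propagates the
    value into a random neighbor's model, omitted here."""
    if random.randrange(subsampling) == 0:
        samples[random.randrange(len(samples))] = value
```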

