Background Subtraction Using Spatio-Temporal Group Sparsity Recovery

2018 ◽  
Vol 28 (8) ◽  
pp. 1737-1751 ◽  
Author(s):  
Xin Liu ◽  
Jiawen Yao ◽  
Xiaopeng Hong ◽  
Xiaohua Huang ◽  
Ziheng Zhou ◽  
...  
2018 ◽  
pp. 1431-1460
Author(s):  
Jeyabharathi D ◽  
Dejey D

Developing universal methods for background subtraction and object tracking is one of the most critical and difficult challenges in many video processing and computer-vision applications. To achieve superior foreground detection quality across unconstrained scenarios, a novel Two Layer Rotational Symmetry Dynamic Texture (RSDT) model is proposed, which mitigates illumination variations by using two layers of spatio-temporal patches. Spatio-temporal patches describe both the motion and appearance parameters of a video sequence. The concept of a key frame is used to avoid redundant samples. The Auto-Regressive Integrated Moving Average (ARIMA) model (Hyndman & Rob, 2015) estimates the statistical parameters from the subspace. The uniform Local Derivative Pattern (LDP) (Zhang et al., 2010) serves as the feature for tracking objects in a video. Extensive experimental evaluations on a wide range of benchmark datasets validate the efficiency of RSDT compared to the Center Symmetric Spatio Temporal Local Ternary Pattern (CS-STLTP) (Lin et al., 2015) for unconstrained video analytics.
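
A minimal sketch of the ARIMA step follows, assuming statsmodels: it fits an ARIMA model to the temporal signal of a single spatio-temporal patch (here its mean intensity) and treats a small one-step prediction error as evidence of dynamic background. The patch size, ARIMA order, and threshold are illustrative assumptions, not the parameters of the RSDT model itself.

    # Illustrative sketch only (not the authors' implementation): fit an ARIMA
    # model to the temporal signal of one spatio-temporal patch and flag the
    # latest frame as background when the one-step prediction error is small.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    def patch_series(frames, y, x, size=8):
        """Mean intensity of a (size x size) patch across a list of gray frames."""
        return np.array([f[y:y + size, x:x + size].mean() for f in frames])

    def looks_like_background(series, order=(2, 1, 1), threshold=5.0):
        """Fit ARIMA on all but the last value and test the last one."""
        fit = ARIMA(series[:-1], order=order).fit()
        predicted = fit.forecast(steps=1)[0]
        return abs(series[-1] - predicted) < threshold  # small error -> background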


Author(s):  
Jae Hyup Jeong ◽  
Jun Woo Lee ◽  
Jong Wook Kang ◽  
Dong Seok Jeong


Author(s):  
Meenal Suryakant Vatsaraj ◽  
Rajan Vishnu Parab ◽  
D S Bade

Anomalous behaviors in videos of crowded areas, defined as deviations from the dominant pattern, are detected and localized. Appearance and motion information are taken into account to robustly identify different kinds of anomalies across a wide range of scenes. Our approach, based on histograms of oriented gradients (HOG) and a Markov random field (MRF), captures the varying dynamics of the crowded environment: the HOG features combined with the well-known MRF effectively recognize and characterize each frame of each scene. Anomaly detection with an artificial neural network uses both appearance and motion features extracted within the spatio-temporal domain of moving pixels, which ensures robustness to local noise and thus increases the accuracy of local anomaly detection at low computational cost. To extract the region of interest, the background must first be subtracted; background subtraction can be performed with methods such as the weighted moving mean, the Gaussian mixture model, or kernel density estimation.
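
A minimal sketch of the background-subtraction and HOG stages mentioned above, assuming OpenCV: a Gaussian mixture model (MOG2) yields the foreground mask, and HOG features are computed on the moving region. The parameter values and the fixed 64x128 HOG window are illustrative assumptions; the MRF and neural-network stages of the method are not reproduced here.

    # Sketch, assuming OpenCV: GMM (MOG2) background subtraction to isolate the
    # moving region, then HOG features on that region. Parameters are illustrative.
    import cv2

    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)
    hog = cv2.HOGDescriptor()  # default 64x128 detection window

    def frame_features(frame):
        """Return HOG features of the moving region of one BGR video frame."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mask = subtractor.apply(gray)            # foreground mask from the GMM
        points = cv2.findNonZero(mask)
        if points is None:                       # no motion detected in this frame
            return None
        x, y, w, h = cv2.boundingRect(points)
        roi = cv2.resize(gray[y:y + h, x:x + w], (64, 128))  # fit the HOG window
        return hog.compute(roi)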


2021 ◽  
Vol 7 (5) ◽  
pp. 90
Author(s):  
Slim Hamdi ◽  
Samir Bouindour ◽  
Hichem Snoussi ◽  
Tian Wang ◽  
Mohamed Abid

In recent years, the use of drones for surveillance tasks has been on the rise worldwide. However, in the context of anomaly detection, only normal events are available for the learning process, so implementing a generative learning method in an unsupervised mode becomes fundamental. In this context, we propose a new end-to-end architecture capable of generating optical flow images from the original UAV images and extracting compact spatio-temporal characteristics for anomaly detection purposes. It is trained with a custom loss function defined as the sum of three terms, the reconstruction loss (Rl), the generation loss (Gl) and the compactness loss (Cl), to ensure an efficient classification of the “deep-one” class. In addition, we propose to minimize the effect of UAV motion in video processing by applying background subtraction on the optical flow images. We tested our method on a very complex dataset, the mini-drone video dataset, and obtained results surpassing existing techniques’ performances with an AUC of 85.3.
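
A minimal PyTorch sketch of the three-term objective described above (total = Rl + Gl + Cl). The network itself, the loss weights, and the exact compactness formulation are not specified here; the intra-batch variance used for Cl below is a common choice for deep one-class compactness and is an assumption, not the authors' definition.

    # Sketch of the combined objective: reconstruction + generation + compactness.
    # Weights and the compactness formulation are illustrative assumptions.
    import torch.nn.functional as F

    def total_loss(x, x_rec, flow_true, flow_gen, features,
                   w_rec=1.0, w_gen=1.0, w_cmp=1.0):
        rl = F.mse_loss(x_rec, x)             # reconstruction loss (Rl)
        gl = F.mse_loss(flow_gen, flow_true)  # generation loss (Gl): generated vs. true flow
        cl = features.var(dim=0).mean()       # compactness loss (Cl): keep features of
                                              # normal samples tightly clustered
        return w_rec * rl + w_gen * gl + w_cmp * cl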

