Moving object detection in real-time visual surveillance using background subtraction technique

Author(s): Dileep Kumar Yadav, Lavanya Sharma, Sunil Kumar Bharti

Author(s): A. Roshan, Y. Zhang

Abstract. Background subtraction is a common approach to moving object detection in computer vision applications. Each background subtraction technique relies on an image-thresholding algorithm, and different thresholding methods produce different threshold values and hence dissimilar detection results. Most background subtraction techniques use grey-scale images to reduce computational cost, but statistics-based thresholding methods ignore the spatial distribution of pixels. In this study, the authors develop a background subtraction technique that uses the Lab colour space and exploits spatial correlations for image thresholding. Four thresholding methods based on spatial correlation are developed by computing the difference between opposite colour pairs of the background and foreground frames. Across 9 indoor and outdoor scenes, the moving object is detected successfully in 7, whereas an existing background subtraction technique using grey images with commonly used thresholding methods detects moving objects in only 1–5 scenes. The shapes and boundaries of the detected objects are also better defined with the developed technique.
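The core operation the abstract describes, differencing a background frame against the current frame and thresholding the result into a binary foreground mask, can be sketched as below. This is a minimal illustrative version on grey-scale pixel grids; the authors' actual method additionally works on opposite colour pairs in Lab space and derives the threshold from spatial correlations, which is not reproduced here. The threshold value of 30 is an arbitrary assumption for the example.

```python
def subtract_background(background, frame, threshold=30):
    """Return a binary foreground mask: 1 where |frame - background| > threshold.

    background, frame: nested lists of pixel intensities with the same shape.
    threshold: fixed example value; the paper derives it from spatial correlation.
    """
    return [
        [1 if abs(f - b) > threshold else 0 for f, b in zip(f_row, b_row)]
        for f_row, b_row in zip(frame, background)
    ]

# Example: one bright pixel appears against an otherwise static background.
bg = [[10, 10], [10, 10]]
cur = [[10, 200], [10, 10]]
print(subtract_background(bg, cur))  # [[0, 1], [0, 0]]
```

With a colour space such as Lab, the same differencing would be applied per channel and the per-channel masks combined, which is where the choice of thresholding method starts to matter.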


2020, Vol. 30 (04), pp. 2050016
Author(s): Danilo Avola, Marco Bernardi, Luigi Cinque, Cristiano Massaroni, Gian Luca Foresti

Moving object detection in video streams plays a key role in many computer vision applications. In particular, separating background from foreground items is a prerequisite for more complex tasks such as object classification, vehicle tracking, and person re-identification. Despite the progress made in recent years, a main challenge of moving object detection remains the management of dynamic aspects, including bootstrapping and illumination changes. In addition, the recent widespread adoption of Pan–Tilt–Zoom (PTZ) cameras has made these aspects even harder to manage due to the cameras' mixed movements (i.e. pan, tilt, and zoom). In this paper, a combined keypoint clustering and neural background subtraction method, based on a Self-Organized Neural Network (SONN), is proposed for real-time moving object detection in video sequences acquired by PTZ cameras. Initially, the method performs spatio-temporal tracking of the sets of moving keypoints to recognize the foreground areas and establish the background. Then, it applies neural background subtraction, localized in these areas, to accomplish a foreground detection able to manage bootstrapping and gradual illumination changes. Experimental results on three well-known public datasets, and comparisons with key works in the current literature, show the efficiency of the proposed method in terms of background modeling and subtraction.
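The paper's SONN-based background model is more elaborate than can be shown here, but the behaviour it must support, a background that gradually absorbs illumination changes rather than flagging them as foreground, is often illustrated with a simple running-average update, sketched below. The function name and the learning rate `alpha` are assumptions for illustration, not from the paper.

```python
def update_background(background, frame, alpha=0.1):
    """Blend the current frame into the background model.

    A running-average baseline: each background pixel moves a fraction
    alpha toward the corresponding frame pixel, so gradual illumination
    changes are absorbed into the model instead of being detected as motion.
    """
    return [
        [(1.0 - alpha) * b + alpha * f for f, b in zip(f_row, b_row)]
        for f_row, b_row in zip(frame, background)
    ]

# Example: the scene brightens; the model drifts toward the new intensity.
bg = [[100.0]]
bg = update_background(bg, [[200.0]])  # ≈ [[110.0]] after one step
```

Applying this update only outside the keypoint-tracked foreground areas, as the paper's localized subtraction suggests, keeps genuinely moving objects from being absorbed into the background.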

