Foreground Segmentation in Video Sequences with a Dynamic Background

Author(s):  
Chu Tang ◽  
M. Omair Ahmad ◽  
Chunyan Wang

Author(s):  
K. Anuradha ◽  
N.R. Raajan

Video processing has gained considerable significance because of its applications in many areas of research, including the monitoring of movements in public places for surveillance. Video sequences from standard datasets such as I2R, CAVIAR and UCSD are often used for video processing applications and research. Actors and their movements must be identified in video sequences against both static and dynamic backgrounds, and the significance of research in video processing lies in identifying the foreground movement of actors and objects. Foreground identification can be performed against a static or a dynamic background, but it becomes considerably more complex when the background is dynamic. For identifying foreground movement in video sequences with a dynamic background, this article proposes two algorithms, termed Frame Difference between Neighboring Frames using Hue, Saturation and Value (FDNF-HSV) and Frame Difference between Neighboring Frames using Greyscale (FDNF-G). The proposed algorithms are compared with state-of-the-art techniques in terms of F-measure, recall and precision, and the evaluation results show that they deliver improved performance.
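The abstract does not include an implementation, but the core idea of differencing neighboring frames in HSV or greyscale can be sketched as follows. This is a minimal illustration in Python/OpenCV; the function name, threshold value, per-channel combination rule and morphological clean-up are assumptions of the sketch, not the authors' FDNF-HSV/FDNF-G procedures.

```python
import cv2
import numpy as np

def fdnf_mask(prev_frame, curr_frame, use_hsv=True, threshold=25):
    """Illustrative neighboring-frame difference (not the authors' FDNF code).

    Differences two consecutive BGR frames either in HSV space (in the spirit
    of FDNF-HSV) or in greyscale (FDNF-G) and thresholds the result to obtain
    a binary foreground mask.
    """
    if use_hsv:
        prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2HSV)
        curr = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2HSV)
        # Take the largest per-channel difference; this combination rule is an assumption.
        diff = cv2.absdiff(prev, curr).max(axis=2).astype(np.uint8)
    else:
        prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
        curr = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(prev, curr)

    # A fixed threshold and a small morphological opening are placeholder choices.
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
```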


2021 ◽  
pp. 388-397
Author(s):  
Jorge García-González ◽  
Juan Miguel Ortiz-de-Lazcano-Lobato ◽  
Rafael Marcos Luque-Baena ◽  
Ezequiel López-Rubio

Author(s):  
Mohammed Lahraichi ◽  
Khalid Housni ◽  
Samir Mbarki

In recent decades, several methods have been developed to extract moving objects in the presence of a dynamic background. However, most of them use a global threshold and ignore the correlation between neighboring pixels. To address these issues, this paper presents a new approach that first generates a probability image with the Kernel Density Estimation (KDE) method and then applies Maximum A Posteriori estimation in a Markov Random Field (MAP-MRF) to that probability image, yielding an energy function. Instead of applying a thresholding step, this energy function is minimized by the binary graph cut algorithm to detect the moving pixels. The proposed method was tested on various video sequences, and the results show its effectiveness in the presence of a dynamic scene compared to other background subtraction models.
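The pipeline described above can be sketched with a per-pixel Gaussian KDE and the PyMaxflow graph-cut library. The kernel bandwidth, the uniform foreground model, the uniform smoothness weight and the 4-connected grid graph are simplifying assumptions of this sketch, not the paper's exact formulation.

```python
import numpy as np
import maxflow  # PyMaxflow: min-cut/max-flow solver used for the binary graph cut

def kde_probability_image(history, frame, bandwidth=10.0):
    """Per-pixel background probability from a Gaussian KDE over a stack of
    past greyscale frames `history` (shape T x H x W); scores lie in [0, 1]."""
    diffs = history.astype(np.float32) - frame.astype(np.float32)
    # Unnormalised Gaussian kernel, averaged over the T background samples.
    return np.exp(-0.5 * (diffs / bandwidth) ** 2).mean(axis=0)

def graph_cut_segmentation(prob_bg, smoothness=2.0, eps=1e-6):
    """MAP-MRF labelling by graph cut: data term from the probability image,
    uniform pairwise smoothness between 4-connected neighbours."""
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(prob_bg.shape)
    g.add_grid_edges(nodes, smoothness)          # pairwise (Potts-like) term
    prob_fg = 1.0 - prob_bg                      # uniform foreground model (assumption)
    g.add_grid_tedges(nodes, -np.log(prob_fg + eps), -np.log(prob_bg + eps))
    g.maxflow()
    # Boolean labelling of each pixel; which segment is "foreground" follows
    # the source/sink convention of the tedges above.
    return g.get_grid_segments(nodes)
```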


2019 ◽  
Vol 12 (2) ◽  
pp. 145-155 ◽  
Author(s):  
Satrughan Kumar ◽  
Jigyendra Yadav ◽  
Kumar Manoj ◽  
Subramaniam Rajasekaran ◽  
...  

Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4484 ◽  
Author(s):  
Víctor García Rubio ◽  
Juan Antonio Rodrigo Ferrán ◽  
José Manuel Menéndez García ◽  
Nuria Sánchez Almodóvar ◽  
José María Lalueza Mayordomo ◽  
...  

In recent years, the use of unmanned aerial vehicles (UAVs) for surveillance tasks has increased considerably. This technology provides a versatile and innovative approach to the field. However, the automation of tasks such as object recognition or change detection usually requires image processing techniques. In this paper we present a system for change detection in video sequences acquired by moving cameras, based on the combination of image alignment techniques with a deep learning model built on convolutional neural networks (CNNs). This approach covers two important topics. The first is the capability of our system to adapt to variations in the UAV flight, in particular differences in height between flights and slight changes in the camera’s position or in the movement of the UAV caused by natural conditions such as wind; such modifications can be produced by multiple factors, including weather conditions, security requirements and human error. The second is the precision of our model in detecting changes in diverse environments, which has been compared with state-of-the-art change detection methods. This has been measured on the Change Detection 2014 dataset, which provides a selection of labelled images from different scenarios for training change detection algorithms. We used images from the dynamic background, intermittent object motion and bad weather sections, selected to test our algorithm’s robustness to changes in the background, as in real flight conditions. Our system provides a precise solution for these scenarios: the mean F-measure score from the image analysis surpasses 97%, with particularly high precision in the intermittent object motion category, where the score is above 99%.
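The alignment stage the abstract refers to can be approximated with standard feature-based registration. The sketch below uses ORB keypoints and a RANSAC homography from OpenCV as a generic stand-in for the paper's image alignment step; the feature count, matcher settings and reprojection threshold are illustrative assumptions, and the CNN stage is omitted.

```python
import cv2
import numpy as np

def align_frames(reference, moving, max_features=2000):
    """Warp `moving` onto `reference` with ORB keypoints and a RANSAC homography.
    A generic registration sketch; the paper's own alignment step may differ."""
    ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    mov_gray = cv2.cvtColor(moving, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(max_features)
    kp_ref, des_ref = orb.detectAndCompute(ref_gray, None)
    kp_mov, des_mov = orb.detectAndCompute(mov_gray, None)

    # Brute-force Hamming matching with cross-check to discard weak correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_mov), key=lambda m: m.distance)

    src = np.float32([kp_mov[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC discards outlier matches, e.g. those caused by genuinely changed regions.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h))
```

The aligned frame pair would then be passed to the trained change-detection CNN, which is outside the scope of this sketch.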

