Frame Size Reduction for Foreground Detection in Video Sequences

Author(s):  
Miguel A. Molina-Cabello ◽  
Ezequiel López-Rubio ◽  
Rafael Marcos Luque-Baena ◽  
Esteban J. Palomo ◽  
Enrique Domínguez

Sensors ◽ 
2019 ◽  
Vol 19 (23) ◽  
pp. 5142 ◽  
Author(s):  
Dong Liang ◽  
Jiaxing Pan ◽  
Han Sun ◽  
Huiyu Zhou

Foreground detection is an important theme in video surveillance. Conventional background modeling approaches build sophisticated temporal statistical models to detect the foreground from low-level features, while modern semantic/instance segmentation approaches generate high-level foreground annotations but ignore the temporal relevance among consecutive frames. In this paper, we propose a Spatio-Temporal Attention Model (STAM) for cross-scene foreground detection. To fill the semantic gap between low-level and high-level features, appearance and optical flow features are synthesized by attention modules during the feature learning procedure. Experimental results on the CDnet 2014 benchmark show that the proposed method outperforms many state-of-the-art methods on seven evaluation metrics. The attention modules and optical flow increase the F-measure by 9% and 6%, respectively. Without any tuning, the model also generalizes across scenes on the Wallflower and PETS datasets. The processing speed is 10.8 fps at a frame size of 256 × 256.
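As a rough illustration of how appearance and optical-flow features might be fused by an attention module, the PyTorch sketch below gates each stream with a learned spatial attention map before concatenation. The module name, channel sizes, and layer layout are assumptions for illustration, not the architecture reported in the paper.

```python
# Minimal sketch of attention-gated fusion of appearance and optical-flow
# features (hypothetical layer sizes; not the published STAM architecture).
import torch
import torch.nn as nn

class SpatialAttentionFusion(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # 1x1 convolutions produce a single-channel attention map per stream.
        self.att_appearance = nn.Conv2d(channels, 1, kernel_size=1)
        self.att_flow = nn.Conv2d(channels, 1, kernel_size=1)
        # Fuse the two gated streams into a joint representation.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, appearance: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        # Sigmoid attention maps in [0, 1], broadcast over the channel axis.
        a_app = torch.sigmoid(self.att_appearance(appearance))
        a_flow = torch.sigmoid(self.att_flow(flow))
        gated = torch.cat([appearance * a_app, flow * a_flow], dim=1)
        return self.fuse(gated)

# Example: feature maps for a 256x256 frame downsampled by a factor of 4.
app = torch.randn(1, 64, 64, 64)
flo = torch.randn(1, 64, 64, 64)
print(SpatialAttentionFusion()(app, flo).shape)  # torch.Size([1, 64, 64, 64])
```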


Author(s):  
Francisco Javier Lopez-Rubio ◽  
Ezequiel Lopez-Rubio ◽  
Rafael Marcos Luque-Baena ◽  
Enrique Dominguez ◽  
Esteban J. Palomo

2003 ◽  
Vol 16 (3) ◽  
pp. 401-414 ◽  
Author(s):  
Irini Reljin ◽  
Branimir Reljin

The paper considers compressed video streams from the fractal and multifractal (MF) points of view. Publicly available video traces in H.263 and MPEG-4 formats, generated at the Technical University of Berlin, were investigated. It is shown that all of the compressed videos exhibit a fractal (long-range dependent) nature and that higher compression ratios produce more variability in the encoded video stream. This conclusion is confirmed by the MF spectra of the frame-size video traces. By analyzing individual frames and their MF spectra, the additive nature of the traces is also confirmed.
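One common way to check for long-range dependence in a frame-size trace is the aggregated-variance estimate of the Hurst exponent, where Var of the block means scales as m^(2H−2) with block size m. The sketch below is a minimal, self-contained version of that check; the synthetic data and block-size grid are assumptions standing in for the real trace files.

```python
# Rough sketch of a long-range-dependence check on a frame-size trace using
# the aggregated-variance Hurst estimator (toy data, not the published traces).
import numpy as np

def hurst_aggregated_variance(x, block_sizes=None):
    """Estimate H from Var(block means) ~ m**(2H - 2)."""
    x = np.asarray(x, dtype=float)
    if block_sizes is None:
        block_sizes = np.unique(
            np.logspace(0.5, np.log10(len(x) // 10), 20).astype(int))
    variances = []
    for m in block_sizes:
        n_blocks = len(x) // m
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        variances.append(means.var())
    slope = np.polyfit(np.log(block_sizes), np.log(variances), 1)[0]
    return 1.0 + slope / 2.0

# Uncorrelated toy data gives H close to 0.5; traces with long-range
# dependence, as reported for the compressed videos, give H well above 0.5.
rng = np.random.default_rng(0)
frame_sizes = rng.normal(loc=1e4, scale=1e3, size=50_000)
print(f"Estimated Hurst exponent: {hurst_aggregated_variance(frame_sizes):.2f}")
```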


2019 ◽  
Vol 125 ◽  
pp. 481-487 ◽  
Author(s):  
Jorge García-González ◽  
Juan M. Ortiz-de-Lazcano-Lobato ◽  
Rafael M. Luque-Baena ◽  
Miguel A. Molina-Cabello ◽  
Ezequiel López-Rubio

2011 ◽  
Vol 21 (03) ◽  
pp. 225-246 ◽  
Author(s):  
EZEQUIEL LÓPEZ-RUBIO ◽  
RAFAEL MARCOS LUQUE-BAENA ◽  
ENRIQUE DOMÍNGUEZ

Background modeling and foreground detection are key parts of any computer vision system. These problems have been addressed in the literature with several probabilistic approaches based on mixture models. Here we propose a new kind of probabilistic background model based on probabilistic self-organising maps, so that the background pixels are modeled with more flexibility. In addition, a statistical correlation measure is used to test the similarity among nearby pixels, so as to enhance the detection performance by providing feedback to the process. Several well-known benchmark videos have been used to assess the relative performance of our proposal with respect to traditional neural and non-neural methods, with favourable results, both qualitatively and quantitatively. A statistical analysis of the differences among methods demonstrates that our method is significantly better than its competitors. In this way, a strong alternative to classical methods is presented.
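As a loose illustration of the per-pixel modeling idea (not the probabilistic self-organising map formulation of the paper), the sketch below keeps a small set of colour prototypes per pixel, labels a pixel as background when its colour is close to the best-matching prototype, and updates that prototype online. The prototype count, threshold, and learning rate are arbitrary assumptions.

```python
# Minimal per-pixel prototype background model (illustrative only; the paper
# uses probabilistic self-organising maps with a spatial-correlation feedback).
import numpy as np

class PrototypeBackgroundModel:
    def __init__(self, first_frame, n_prototypes=3, learning_rate=0.05, threshold=30.0):
        # Initialise every prototype with the colours of the first frame.
        self.prototypes = np.repeat(first_frame[:, :, None, :].astype(float),
                                    n_prototypes, axis=2)
        self.lr = learning_rate
        self.threshold = threshold

    def apply(self, frame):
        frame = frame.astype(float)
        # Euclidean distance of each pixel to each of its prototypes.
        dists = np.linalg.norm(self.prototypes - frame[:, :, None, :], axis=3)
        best = dists.min(axis=2)        # distance to the best-matching unit
        winner = dists.argmin(axis=2)   # index of the best-matching unit
        foreground = best > self.threshold
        # Online update of the winning prototype for background pixels only.
        h_idx, w_idx = np.nonzero(~foreground)
        win = winner[h_idx, w_idx]
        self.prototypes[h_idx, w_idx, win] += self.lr * (
            frame[h_idx, w_idx] - self.prototypes[h_idx, w_idx, win])
        return foreground.astype(np.uint8)

# Usage: feed consecutive H x W x 3 frames and collect the binary masks.
```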


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Qianxia Cao ◽  
Zhengwu Wang ◽  
Kejun Long

In complex urban intersection scenarios, heavy traffic and signal control leave many slow-moving or temporarily stopped vehicles behind the stop lines. At these intersections, it is difficult to extract traffic parameters, such as delay and queue length, from vehicle detection and tracking because the vehicles are dense and severely occluded. In this study, a novel background subtraction algorithm based on sparse representation is proposed to detect the traffic foreground at complex intersections and obtain traffic parameters. By establishing a novel background dictionary update model, the proposed method solves the problem that the background is easily contaminated by slow-moving or temporarily stopped vehicles, which prevents the complete traffic foreground from being obtained. Using real-world urban traffic videos and the PV video sequences of i-LIDS, we first compare the proposed method with other detection methods based on sparse representation. Then, the proposed method is compared with other commonly used traffic foreground detection models in different urban intersection traffic scenarios. The experimental results show that the proposed method keeps the background model unpolluted by slow-moving or temporarily stopped vehicles and performs well in both qualitative and quantitative evaluations.
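A bare-bones sketch of the general idea, reconstructing each patch sparsely against a background dictionary and updating the dictionary only from patches judged to be background, is given below. The greedy coder, patch handling, and thresholds are simplified assumptions and do not reproduce the paper's dictionary update model.

```python
# Illustrative sparse-representation background subtraction for image patches.
# Patches that reconstruct poorly from the background dictionary are labelled
# foreground; only background patches refresh the dictionary, so slow-moving
# vehicles do not leak into the model (simplified stand-in for the paper's rule).
import numpy as np

def sparse_code(y, D, n_nonzero=5):
    """Greedy matching pursuit: approximate y with a few columns of D."""
    residual, coef = y.copy(), np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        k = np.argmax(np.abs(D.T @ residual))
        step = D[:, k] @ residual
        coef[k] += step
        residual -= step * D[:, k]
    return coef, residual

def detect_foreground_patch(patch, dictionary, err_threshold=0.15, alpha=0.05):
    """Return (is_foreground, updated_dictionary) for one vectorised patch."""
    y = patch / (np.linalg.norm(patch) + 1e-8)      # normalise the patch
    coef, residual = sparse_code(y, dictionary)
    is_foreground = np.linalg.norm(residual) > err_threshold
    if not is_foreground:
        # Conservative update: blend the patch into its most-used atom only
        # when the background dictionary explains the patch well.
        k = np.argmax(np.abs(coef))
        atom = (1 - alpha) * dictionary[:, k] + alpha * y
        dictionary = dictionary.copy()
        dictionary[:, k] = atom / (np.linalg.norm(atom) + 1e-8)
    return is_foreground, dictionary

# Toy usage with random data standing in for 8x8 grayscale patches.
rng = np.random.default_rng(1)
D = rng.normal(size=(64, 32))
D /= np.linalg.norm(D, axis=0)
fg, D = detect_foreground_patch(rng.normal(size=64), D)
print("foreground:", fg)
```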

