optical flow features
Recently Published Documents

TOTAL DOCUMENTS: 21 (five years: 14)
H-INDEX: 4 (five years: 2)

Author(s): Hongwei Ren, Chenghao Li, Xinyi Zhang, Chenchen Ding, Changhai Man, ...

Author(s): Mesyella, Timotius Ivan Casey, Edward Susanto, Irene Anindaputri Iswanto

2020, Vol 14 (13), pp. 1845-1854
Author(s): Yuan Hu, Hubert P. H. Shum, Edmond S. L. Ho

Sensors, 2019, Vol 19 (23), pp. 5142
Author(s): Dong Liang, Jiaxing Pan, Han Sun, Huiyu Zhou

Foreground detection is an important theme in video surveillance. Conventional background-modeling approaches build sophisticated temporal statistical models to detect foreground from low-level features, while modern semantic/instance segmentation approaches generate high-level foreground annotations but ignore the temporal relevance among consecutive frames. In this paper, we propose a Spatio-Temporal Attention Model (STAM) for cross-scene foreground detection. To bridge the semantic gap between low-level and high-level features, appearance and optical flow features are synthesized by attention modules during feature learning. Experimental results on the CDnet 2014 benchmark show that STAM outperforms many state-of-the-art methods on seven evaluation metrics. The attention modules and the optical flow features improve the F-measure by 9% and 6%, respectively. Without any fine-tuning, the model also generalizes across scenes on the Wallflower and PETS datasets. The processing speed is 10.8 fps at a frame size of 256×256.
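The abstract does not include code, but the core idea of fusing appearance and optical-flow feature maps through attention can be sketched. The following is a minimal NumPy illustration (all function names and shapes are hypothetical; the paper's attention modules are learned end-to-end, whereas here the per-channel mixing weights are computed directly from pooled descriptors for clarity):

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(appearance, flow):
    """Fuse two (C, H, W) feature maps with per-channel attention weights.

    Hypothetical sketch: each stream is reduced to a (C,) descriptor by
    global average pooling, and a softmax across the two streams yields
    per-channel convex mixing weights.
    """
    # Global average pooling -> one (C,) descriptor per stream, stacked to (2, C)
    desc = np.stack([appearance.mean(axis=(1, 2)),
                     flow.mean(axis=(1, 2))])
    weights = softmax(desc, axis=0)          # (2, C); sums to 1 per channel
    fused = (weights[0][:, None, None] * appearance
             + weights[1][:, None, None] * flow)
    return fused

# Toy usage: 8-channel, 4x4 feature maps from the two streams
rng = np.random.default_rng(0)
app = rng.standard_normal((8, 4, 4))
flo = rng.standard_normal((8, 4, 4))
out = attention_fuse(app, flo)
print(out.shape)  # (8, 4, 4)
```

In the actual model the weights would come from learned layers conditioned on both streams; the convex per-channel combination above just shows how attention lets the network favor appearance or motion cues channel by channel.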

