Robust real-time visual object tracking via multi-scale fully convolutional Siamese networks

2018 ◽  
Vol 77 (17) ◽  
pp. 22131-22143 ◽  
Author(s):  
Longchao Yang ◽  
Peilin Jiang ◽  
Fei Wang ◽  
Xuan Wang

Author(s):
Zheng Zhu ◽  
Qiang Wang ◽  
Bo Li ◽  
Wei Wu ◽  
Junjie Yan ◽  
...  

2014 ◽  
Vol 75 (4) ◽  
pp. 2393-2409 ◽  
Author(s):  
Zebin Cai ◽  
Zhenghui Gu ◽  
Zhu Liang Yu ◽  
Hao Liu ◽  
Ke Zhang

Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2362 ◽  
Author(s):  
Yijin Yang ◽  
Yihong Zhang ◽  
Demin Li ◽  
Zhijie Wang

Correlation filter-based methods have recently performed remarkably well in terms of accuracy and speed in the visual object tracking research field. However, most existing correlation filter-based methods are not robust to significant appearance changes in the target, especially when the target undergoes deformation, illumination variation, or rotation. In this paper, a novel parallel correlation filters (PCF) framework is proposed for real-time visual object tracking. Firstly, the proposed method constructs two parallel correlation filters, one for tracking the appearance changes of the target and the other for tracking its translation. Secondly, by performing a weighted merge of the response maps of these two parallel correlation filters, the proposed method accurately locates the center position of the target. Finally, in the training stage, a new, more suitable distribution of the correlation output is proposed to replace the original Gaussian distribution, yielding more accurate correlation filters and preventing the model from drifting. Extensive qualitative and quantitative experiments on the common object tracking benchmarks OTB-2013 and OTB-2015 demonstrate that the proposed PCF tracker outperforms most state-of-the-art trackers while maintaining real-time tracking performance.
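The fusion step can be illustrated with a minimal NumPy sketch: the two response maps are combined with fixed weights and the target centre is read off at the peak of the fused map. The weights, the map sizes, and the function name fuse_responses are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch of weighted response-map fusion for two parallel
# correlation filters (assumed weights and sizes, for illustration only).
import numpy as np

def fuse_responses(appearance_response, translation_response,
                   w_app=0.5, w_trans=0.5):
    """Merge the two response maps and return the peak location."""
    fused = w_app * appearance_response + w_trans * translation_response
    # The target centre is taken at the maximum of the fused response.
    dy, dx = np.unravel_index(np.argmax(fused), fused.shape)
    return fused, (dy, dx)

# Example with two hypothetical response maps of the same size.
rng = np.random.default_rng(0)
resp_app = rng.random((64, 64))
resp_trans = rng.random((64, 64))
_, center = fuse_responses(resp_app, resp_trans)
print("estimated target centre (row, col):", center)
```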


Electronics ◽  
2019 ◽  
Vol 8 (10) ◽  
pp. 1084 ◽  
Author(s):  
Dong-Hyun Lee

The visual object tracking problem seeks to track an arbitrary object through a video, and many deep convolutional neural network-based algorithms have achieved significant performance improvements in recent years. However, most of them do not guarantee real-time operation because of the large computational overhead of deep feature extraction. This paper presents a single-crop visual object tracking algorithm based on a fully convolutional Siamese network (SiamFC). The proposed algorithm significantly reduces the computational burden by extracting multiple-scale feature maps from a single image crop. Experimental results show that the proposed algorithm achieves superior speed compared with SiamFC.
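The single-crop idea can be sketched as follows: run the backbone once on one search crop and derive the different scales by rescaling the shared feature map before the SiamFC-style cross-correlation, instead of re-cropping and re-extracting at every scale. The backbone layers, scale factors, tensor sizes, and the helper multi_scale_responses below are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch of single-crop, multi-scale response computation (PyTorch).
# Assumed backbone and scale factors; not the published network.
import torch
import torch.nn.functional as F

backbone = torch.nn.Sequential(        # stand-in for the SiamFC backbone
    torch.nn.Conv2d(3, 32, kernel_size=3, stride=2),
    torch.nn.ReLU(),
    torch.nn.Conv2d(32, 64, kernel_size=3, stride=2),
)

def multi_scale_responses(exemplar_feat, search_img, scales=(0.96, 1.0, 1.04)):
    """Response maps for several scales from a single search crop."""
    search_feat = backbone(search_img)            # one forward pass only
    responses = []
    for s in scales:
        # Rescale the shared feature map instead of re-extracting features.
        scaled = F.interpolate(search_feat, scale_factor=s,
                               mode='bilinear', align_corners=False)
        # SiamFC-style cross-correlation with the exemplar features.
        responses.append(F.conv2d(scaled, exemplar_feat))
    return responses

# Example with hypothetical exemplar and search crop sizes.
z = backbone(torch.randn(1, 3, 63, 63))           # exemplar (template) features
maps = multi_scale_responses(z, torch.randn(1, 3, 255, 255))
print([m.shape for m in maps])
```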

