Coupled-Region Visual Tracking Formulation Based on a Discriminative Correlation Filter Bank

Electronics, 2018, Vol. 7(10), p. 244
Author(s): Jian Wei, Feng Liu

The visual tracking algorithm based on the discriminative correlation filter (DCF) has shown excellent performance in recent years, especially since its high tracking speed meets the real-time requirement of object tracking. However, when the target is partially occluded, a traditional single discriminative correlation filter cannot effectively learn which information is reliable, resulting in tracker drift and even failure. To address this issue, this paper proposes a novel tracking-by-detection framework that uses multiple discriminative correlation filters, called a discriminative correlation filter bank (DCFB), corresponding to different target sub-region and global-region patches, and combines and optimizes the final correlation output in the frequency domain. During tracking, the sub-region patches are zero-padded to the same size as the global target region, which effectively avoids noise aliasing during the correlation operation and thereby improves the robustness of the discriminative correlation filter. Considering that the sub-region motion model is constrained by the global target region, adding the global-region appearance model to our framework preserves the intrinsic structure of the target, thus effectively utilizing the discriminative information of the visible sub-regions to mitigate tracker drift when partial occlusion occurs. In addition, an adaptive scale estimation scheme is incorporated into our algorithm to make the tracker more robust against potentially challenging attributes. Experimental results on the OTB-2015 and VOT-2015 datasets demonstrate that our method performs favorably compared with several state-of-the-art trackers.
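As a rough, hedged illustration of the filter-bank idea described above (not the authors' implementation), the sketch below trains single-channel ridge-regression filters in the frequency domain, zero-pads each sub-region patch to the global-region size, and sums the per-filter correlation responses. The function names, the fixed weighting scheme, and the single-channel grayscale assumption are all illustrative choices.

```python
import numpy as np

def train_filter(patch, target_response, lam=1e-2):
    """Closed-form single-channel DCF (ridge regression) in the frequency domain."""
    X = np.fft.fft2(patch)
    Y = np.fft.fft2(target_response)
    return (np.conj(X) * Y) / (np.conj(X) * X + lam)

def zero_pad_to(patch, shape):
    """Zero-pad a sub-region patch to the global-region size to limit aliasing.
    Placed at the top-left corner here for simplicity; a real tracker would
    keep the patch aligned with its position inside the global region."""
    out = np.zeros(shape, dtype=patch.dtype)
    h, w = patch.shape
    out[:h, :w] = patch
    return out

def combined_response(global_patch, sub_patches, filters, weights):
    """Sum the weighted per-filter correlation outputs in the frequency domain;
    the peak of the combined response gives the estimated translation."""
    H, W = global_patch.shape
    patches = [global_patch] + [zero_pad_to(p, (H, W)) for p in sub_patches]
    resp = np.zeros((H, W))
    for w_k, f_k, p_k in zip(weights, filters, patches):
        resp += w_k * np.real(np.fft.ifft2(f_k * np.fft.fft2(p_k)))
    return resp
```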

2020, Vol. 100, p. 107157
Author(s): Bo Huang, Tingfa Xu, Jianan Li, Ziyi Shen, Yiwen Chen

2018, Vol. 22(2), pp. 791-805
Author(s): Guoxia Xu, Hu Zhu, Lizhen Deng, Lixin Han, Yujie Li, ...

Author(s): Lina Gao, Bing Liu, Ping Fu, Mingzhu Xu, Junbao Li

Author(s): Libin Xu, Pyoungwon Kim, Mengjie Wang, Jinfeng Pan, Xiaomin Yang, ...

Abstract: Discriminative correlation filter (DCF)-based tracking methods have achieved remarkable performance in visual tracking. However, the existing DCF paradigm still suffers from problems such as the boundary effect, filter degradation, and aberrance. To address these problems, we propose a spatio-temporal joint aberrance suppressed regularization (STAR) correlation filter tracker under a unified response-map framework. Specifically, a dynamic spatio-temporal regularizer is introduced into the DCF to simultaneously alleviate the boundary effect and filter degradation. Meanwhile, an aberrance-suppressed regularizer is exploited to reduce the interference of background clutter. The proposed STAR model is effectively optimized using the alternating direction method of multipliers (ADMM). Finally, comprehensive experiments on the TC128, OTB2013, OTB2015, and UAV123 benchmarks demonstrate that the STAR tracker achieves compelling performance compared with state-of-the-art (SOTA) trackers.
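As a non-authoritative sketch of how such a regularized filter can be optimized with ADMM, the snippet below solves a simplified single-channel objective with a spatial weight and a temporal penalty via a standard variable-splitting scheme. The objective, the parameter names (w, mu, rho), and the fixed iteration count are assumptions; the actual STAR model additionally includes the aberrance-suppressed term on the response map.

```python
import numpy as np

def admm_regularized_dcf(x, y, w, f_prev, mu=10.0, rho=1.0, iters=5):
    """Schematic ADMM for
        min_f 0.5||x (*) f - y||^2 + 0.5||w . f||^2 + 0.5*mu*||f - f_prev||^2
    with the split f = g; (*) is circular convolution, . is elementwise product.
    Illustrative only; not the STAR model itself."""
    X, Y, Fp = np.fft.fft2(x), np.fft.fft2(y), np.fft.fft2(f_prev)
    f = np.zeros_like(x)
    u = np.zeros_like(x)  # scaled dual variable
    for _ in range(iters):
        # g-step: data term + temporal term + coupling, closed form in the Fourier domain
        F, U = np.fft.fft2(f), np.fft.fft2(u)
        G = (np.conj(X) * Y + mu * Fp + rho * (F + U)) / (np.conj(X) * X + mu + rho)
        g = np.real(np.fft.ifft2(G))
        # f-step: spatial regularizer + coupling, closed form per pixel
        f = rho * (g - u) / (w ** 2 + rho)
        # dual update
        u = u + f - g
    return f
```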


2021, Vol. 2021, pp. 1-9
Author(s): Yun Liang, Dong Wang, Yijin Chen, Lei Xiao, Caixing Liu

This paper proposes a new visual tracking method that constructs a robust appearance model of the target with convolutional sparse coding. First, our method uses convolutional sparse coding to decompose the region of interest around the target into a smooth image and four detail images with different fitting degrees. Second, we compute the initial target region by tracking the smooth image with kernelized correlation filtering, and define an appearance model describing the details of the target based on this initial region and the combination of the four detail images. Third, we propose a matching method that uses the overlap rate and the Euclidean distance to evaluate candidates against the appearance model and compute the tracking result based on the detail images. Finally, the two tracking results, computed separately from the smooth image and the detail images, are combined to produce the final target rectangle. Extensive experiments on videos from Tracking Benchmark 2015 demonstrate that our method produces much better results than most existing visual tracking methods.
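To make the candidate-evaluation step concrete, here is a minimal sketch that scores candidate boxes by their overlap rate (IoU) with the smooth-image result and the Euclidean distance between box centres. The [x, y, w, h] box format, the linear combination, and the alpha/beta weights are illustrative assumptions rather than the paper's exact matching rule.

```python
import numpy as np

def iou(a, b):
    """Overlap rate (intersection over union) between two [x, y, w, h] boxes."""
    iw = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def center_distance(a, b):
    """Euclidean distance between the centres of two [x, y, w, h] boxes."""
    ca = np.array([a[0] + a[2] / 2.0, a[1] + a[3] / 2.0])
    cb = np.array([b[0] + b[2] / 2.0, b[1] + b[3] / 2.0])
    return float(np.linalg.norm(ca - cb))

def score_candidates(initial_box, candidates, alpha=1.0, beta=0.02):
    """Rank candidate boxes: reward overlap with the smooth-image result,
    penalise centre drift. alpha and beta are illustrative weights."""
    scores = [alpha * iou(initial_box, c) - beta * center_distance(initial_box, c)
              for c in candidates]
    return candidates[int(np.argmax(scores))], scores
```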

