Spatio-temporal joint aberrance suppressed correlation filter for visual tracking

Author(s): Libin Xu, Pyoungwon Kim, Mengjie Wang, Jinfeng Pan, Xiaomin Yang, ...

Abstract: Discriminative correlation filter (DCF)-based tracking methods have achieved remarkable performance in visual tracking. However, the existing DCF paradigm still suffers from dilemmas such as the boundary effect, filter degradation, and aberrance. To address these problems, we propose a spatio-temporal joint aberrance suppressed regularization (STAR) correlation filter tracker under a unified response-map framework. Specifically, a dynamic spatio-temporal regularizer is introduced into the DCF to simultaneously alleviate the boundary effect and filter degradation. Meanwhile, an aberrance suppressed regularizer is exploited to reduce the interference of background clutter. The proposed STAR model is effectively optimized using the alternating direction method of multipliers (ADMM). Finally, comprehensive experiments on the TC128, OTB2013, OTB2015 and UAV123 benchmarks demonstrate that the STAR tracker achieves compelling performance compared with state-of-the-art (SOTA) trackers.
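
As background for the response-map framework mentioned above, the following minimal sketch shows the generic single-channel DCF building block (closed-form ridge regression in the Fourier domain) that STAR extends. It deliberately omits the spatio-temporal and aberrance-suppressed regularizers and the ADMM solver; the patch size and Gaussian label width are illustrative assumptions.

```python
import numpy as np

def train_dcf(x, y, lam=1e-2):
    """Closed-form single-channel DCF: ridge regression in the Fourier domain.
    x: training patch, y: desired Gaussian-shaped response, lam: regularization weight."""
    X = np.fft.fft2(x)
    Y = np.fft.fft2(y)
    # Conjugate filter that maps x to y under circular correlation
    return (Y * np.conj(X)) / (X * np.conj(X) + lam)

def detect(H_conj, z):
    """Correlate the learned filter with a search patch z and return the response map."""
    return np.real(np.fft.ifft2(H_conj * np.fft.fft2(z)))

# Toy usage: the peak of the response map gives the estimated translation.
x = np.random.rand(64, 64)
yy, xx = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
y = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 3.0 ** 2))
H_conj = train_dcf(x, y)
resp = detect(H_conj, np.roll(x, (5, -3), axis=(0, 1)))
dy, dx = np.unravel_index(np.argmax(resp), resp.shape)  # shifted peak location
```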

2019, Vol 9 (7), pp. 1338
Author(s): Bin Zhou, Tuo Wang

Accurate visual tracking is a challenging issue in computer vision. Correlation filter (CF)-based methods are widely adopted in visual tracking because of their efficiency and high performance. Nonetheless, traditional CF-based trackers exploit insufficient context information and easily drift in scenes with fast motion or background clutter. Moreover, CF-based trackers are sensitive to partial occlusion, which may reduce their overall performance and even lead to tracking failure. In this paper, we present an adaptive context-aware (CA) and structural correlation filter for tracking. Firstly, we propose a novel context selection strategy to obtain negative samples. Secondly, to gain robustness against partial occlusion, we construct a structural correlation filter by learning both holistic and local models. Finally, we introduce an adaptive updating scheme based on a fluctuation parameter. Extensive experiments on the object tracking benchmark (OTB-100) dataset demonstrate that our proposed tracker performs favorably against several state-of-the-art trackers.
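
A minimal sketch of the context-aware component described above, assuming the common closed-form formulation in which context patches act as zero-response negative samples. The paper's adaptive context selection strategy, structural (holistic plus local) filters, and fluctuation-based update are not reproduced, and the regularization weights lam1 and lam2 are illustrative.

```python
import numpy as np

def train_ca_filter(target, contexts, y, lam1=1e-2, lam2=25.0):
    """Generic context-aware CF: surrounding context patches are regressed to zero,
    which suppresses responses on background clutter around the target."""
    X0 = np.fft.fft2(target)
    Y = np.fft.fft2(y)
    denom = X0 * np.conj(X0) + lam1
    for c in contexts:                        # hard-negative context patches
        Xc = np.fft.fft2(c)
        denom = denom + lam2 * Xc * np.conj(Xc)
    # Conjugate filter; at test time the response is ifft2(filter * fft2(search_patch))
    return (Y * np.conj(X0)) / denom
```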


Sensors, 2021, Vol 21 (3), pp. 889
Author(s): Hang Chen, Weiguo Zhang, Danghui Yan

Object information significantly affects the performance of visual tracking. However, it is difficult to obtain accurate target foreground information in challenging scenarios such as occlusion, background clutter, and drastic appearance changes. Traditional correlation filter methods update the model with simple linear interpolation, which may introduce noise and discard reliable target information, degrading tracking performance. In this paper, we propose a novel robust visual tracking framework with reliable object information and a Kalman filter (KF). Firstly, we analyze the reliability of the tracking process, calculate the confidence of the target information at the current estimated location, and decide whether the online training and update step is necessary. Secondly, we model the target motion between frames with a KF module and use it to supplement the correlation filter estimate. Finally, to retain the most reliable target information from the first frame throughout the tracking process, we propose a new online training method that improves the robustness of the tracker. Extensive experiments on several benchmarks demonstrate the effectiveness and robustness of our proposed method, which achieves comparable or better performance than several other state-of-the-art trackers.
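
The sketch below illustrates two plausible ingredients of the framework described above: a response-confidence score used to gate online updates (APCE is assumed here; the paper may use a different reliability measure) and a constant-velocity Kalman filter that supplements the correlation filter's position estimate.

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy: a common confidence measure for a CF response map.
    A low value suggests an unreliable estimate, so the online update can be skipped."""
    peak, low = response.max(), response.min()
    return (peak - low) ** 2 / np.mean((response - low) ** 2)

class ConstantVelocityKF:
    """Minimal 2D constant-velocity Kalman filter for smoothing the estimated target position."""
    def __init__(self, pos, q=1e-2, r=1.0):
        self.x = np.array([pos[0], pos[1], 0.0, 0.0])                  # state: [px, py, vx, vy]
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0          # constant-velocity transition
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0   # observe position only
        self.Q, self.R = q * np.eye(4), r * np.eye(2)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def correct(self, measured_pos):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(measured_pos) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```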


2021, Vol 11 (18), pp. 8427
Author(s): Peiting Gu, Peizhong Liu, Jianhua Deng, Zhi Chen

Discriminative correlation filter (DCF)-based tracking algorithms offer prominent speed and accuracy, and have therefore attracted extensive attention and research. However, some deficiencies remain. For example, the circular-shift sampling process imposes an unrealistic periodic assumption and causes boundary effects, which degrade the tracker's discriminative performance and make the target hard to locate under complex appearance changes. In this paper, a spatial-temporal regularization module built on the BACF (background-aware correlation filter) framework is proposed: a temporal regularization term is introduced to deal effectively with boundary effects while also improving the accuracy of target recognition. The model can be effectively optimized with the alternating direction method of multipliers (ADMM), and each sub-problem has a closed-form solution. In addition, in terms of feature representation, we linearly combine traditional hand-crafted features with deep convolutional features to enhance the discriminative performance of the filter. Extensive experiments on multiple well-known benchmarks show that the proposed algorithm performs favorably against many state-of-the-art trackers and achieves an AUC score of 64.4% on OTB-100.
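
A minimal sketch of the linear fusion of hand-crafted and deep feature responses mentioned above, assuming each feature type yields its own response map and a fixed weight alpha combines them; the paper's actual weighting scheme may differ.

```python
import numpy as np

def fuse_responses(resp_handcrafted, resp_deep, alpha=0.5):
    """Linearly combine the response maps of hand-crafted and deep convolutional features.
    Each map is min-max normalized first so the fixed weight is not dominated by scale."""
    def normalize(r):
        r = r - r.min()
        return r / (r.max() + 1e-12)
    return alpha * normalize(resp_handcrafted) + (1.0 - alpha) * normalize(resp_deep)
```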


2021, Vol 436, pp. 273-282
Author(s): Youmin Yan, Xixian Guo, Jin Tang, Chenglong Li, Xin Wang

2020, Vol 100, pp. 107157
Author(s): Bo Huang, Tingfa Xu, Jianan Li, Ziyi Shen, Yiwen Chen

Electronics, 2018, Vol 7 (10), pp. 244
Author(s): Jian Wei, Feng Liu

The visual tracking algorithm based on the discriminative correlation filter (DCF) has shown excellent performance in recent years, especially since its high tracking speed meets the real-time requirement of object tracking. However, when the target is partially occluded, a traditional single discriminative correlation filter cannot effectively learn which information is reliable, resulting in tracker drift and even failure. To address this issue, this paper proposes a novel tracking-by-detection framework that uses multiple discriminative correlation filters, called a discriminative correlation filter bank (DCFB), corresponding to different target sub-region and global-region patches, and combines and optimizes the final correlation output in the frequency domain. In tracking, the sub-region patches are zero-padded to the same size as the global target region, which effectively avoids noise aliasing during the correlation operation and thereby improves the robustness of the discriminative correlation filter. Considering that the sub-region target motion model is constrained by the global target region, adding the global-region appearance model to our framework preserves the intrinsic structure of the target, so the discriminative information of the visible sub-regions can be effectively utilized to mitigate tracker drift when partial occlusion occurs. In addition, an adaptive scale estimation scheme is incorporated into our algorithm to make the tracker more robust against potentially challenging attributes. Experimental results on the OTB-2015 and VOT-2015 datasets demonstrate that our method performs favorably compared with several state-of-the-art trackers.
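
A minimal sketch of the zero-padding step described above, assuming each sub-region filter has already been learned at the global region's resolution; the top-left placement of the patch and the plain summation of responses are illustrative choices rather than the paper's exact combination rule.

```python
import numpy as np

def zero_pad_to(patch, global_shape):
    """Zero-pad a sub-region patch to the global target-region size so every filter is
    correlated at the same resolution, avoiding aliasing between differently sized patches."""
    out = np.zeros(global_shape, dtype=patch.dtype)
    h, w = patch.shape
    out[:h, :w] = patch   # top-left placement is an illustrative assumption
    return out

def combined_response(filters_conj, patches, global_shape):
    """Accumulate the frequency-domain responses of the global and sub-region filters."""
    resp = np.zeros(global_shape)
    for H_conj, patch in zip(filters_conj, patches):
        Z = np.fft.fft2(zero_pad_to(patch, global_shape))
        resp += np.real(np.fft.ifft2(H_conj * Z))
    return resp
```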


Sensors, 2019, Vol 19 (22), pp. 4904
Author(s): Yeongbin Kim, Joongchol Shin, Hasil Park, Joonki Paik

Online training frameworks based on discriminative correlation filters have recently shown significant improvement in both accuracy and speed for visual tracking. However, correlation filter-based discriminative approaches share a common problem: tracking performance degrades when the local structure of the target is distorted by the boundary effect. The shape distortion of the target is mainly caused by the circulant structure in Fourier-domain processing, which makes the correlation filter learn from distorted training samples. In this paper, we present a structure-attention network that preserves the target structure against the distortion caused by the boundary effect. More specifically, we adopt a variational auto-encoder as the structure-attention network to generate diverse and representative target structures. We also propose two denoising criteria using a novel reconstruction loss for the variational auto-encoding framework to capture more robust structures even under boundary conditions. Through the proposed structure-attention framework, discriminative correlation filters can learn robust structural information of targets during online training, with enhanced discriminative performance and adaptability. Experimental results on major visual tracking benchmark datasets show that the proposed method produces better or comparable performance relative to state-of-the-art tracking methods, with a real-time processing speed of more than 80 frames per second.
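
For reference, the sketch below shows the standard variational auto-encoder objective (MSE reconstruction plus KL divergence to a unit Gaussian) that the proposed denoising reconstruction criteria build on; the paper's novel loss itself is not reproduced here.

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """Standard VAE objective: reconstruction error plus KL divergence between the
    approximate posterior N(mu, exp(log_var)) and the unit-Gaussian prior."""
    recon = np.mean((x - x_recon) ** 2)                                # MSE reconstruction term
    kl = -0.5 * np.mean(1.0 + log_var - mu ** 2 - np.exp(log_var))    # KL(q || N(0, I))
    return recon + kl
```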


2018, Vol 22 (2), pp. 791-805
Author(s): Guoxia Xu, Hu Zhu, Lizhen Deng, Lixin Han, Yujie Li, ...
