Low-Rank Multi-Channel Features for Robust Visual Object Tracking

Symmetry ◽  
2019 ◽  
Vol 11 (9) ◽  
pp. 1155 ◽  
Author(s):  
Fawad ◽  
Muhammad Jamil Khan ◽  
MuhibUr Rahman ◽  
Yasar Amin ◽  
Hannu Tenhunen

Kernel correlation filters (KCF) demonstrate significant potential in visual object tracking by employing robust descriptors. Proper selection of color and texture features can provide robustness against appearance variations. However, the use of multiple descriptors leads to a considerable feature dimension. In this paper, we propose a novel low-rank descriptor that provides better precision and success rates than state-of-the-art trackers. We accomplished this by concatenating the magnitude component of the Overlapped Multi-oriented Tri-scale Local Binary Pattern (OMTLBP), the Robustness-Driven Hybrid Descriptor (RDHD), Histogram of Oriented Gradients (HoG), and Color Naming (CN) features. We reduced the rank of the proposed multi-channel feature to diminish the computational complexity. We formulated the Support Vector Machine (SVM) model by utilizing the circulant matrix of the proposed feature vector in the kernel correlation filter. The use of the discrete Fourier transform in the iterative learning of the SVM reduced the computational complexity of the proposed visual tracking algorithm. Extensive experimental results on the Visual Tracker Benchmark dataset show better accuracy than other state-of-the-art trackers.
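A minimal sketch of the multi-channel kernel correlation filter machinery this abstract builds on, with the Gaussian kernel correlation evaluated in the Fourier domain. It follows the standard KCF formulation rather than the authors' SVM variant, and the low-rank projection of the concatenated OMTLBP/RDHD/HoG/CN channels is assumed to have been applied to the feature tensor x beforehand; function names and parameters are illustrative.

import numpy as np

def gaussian_correlation(xf, yf, sigma):
    # Multi-channel Gaussian kernel correlation in the Fourier domain
    # (standard KCF formulation; xf, yf are per-channel 2-D DFTs of shape H x W x C).
    n = xf.shape[0] * xf.shape[1]
    xx = np.real(np.vdot(xf, xf)) / n            # ||x||^2 via Parseval's theorem
    yy = np.real(np.vdot(yf, yf)) / n
    xyf = np.sum(xf * np.conj(yf), axis=2)       # cross-correlation, summed over channels
    xy = np.real(np.fft.ifft2(xyf))
    k = np.exp(-1.0 / sigma**2 * np.maximum(0, (xx + yy - 2 * xy) / xf.size))
    return np.fft.fft2(k)

def train(x, y, sigma=0.5, lam=1e-4):
    # Closed-form solution alpha_f = y_f / (k_f + lambda) for the Gaussian label map y.
    xf = np.fft.fft2(x, axes=(0, 1))
    kf = gaussian_correlation(xf, xf, sigma)
    return np.fft.fft2(y) / (kf + lam), xf

def detect(alphaf, xf_model, z, sigma=0.5):
    # Response map for a new multi-channel patch z; the target sits at the arg-max.
    zf = np.fft.fft2(z, axes=(0, 1))
    kf = gaussian_correlation(zf, xf_model, sigma)
    return np.real(np.fft.ifft2(alphaf * kf))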

Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 4021 ◽  
Author(s):  
Mustansar Fiaz ◽  
Arif Mahmood ◽  
Soon Ki Jung

We propose to improve visual object tracking by introducing a soft-mask-based low-level feature fusion technique, further strengthened by integrating channel and spatial attention mechanisms. The proposed approach is integrated within a Siamese framework to demonstrate its effectiveness for visual object tracking. The proposed soft mask gives more importance to target regions than to the remaining regions, enabling effective target feature representation and increasing discriminative power. The low-level feature fusion improves the tracker's robustness against distractors. Channel attention identifies the more discriminative channels for better target representation, while spatial attention complements the soft-mask-based approach to better localize the target in challenging tracking scenarios. We evaluated the proposed approach on five publicly available benchmark datasets and performed extensive comparisons with 39 state-of-the-art tracking algorithms. The proposed tracker demonstrates excellent performance compared to the existing state-of-the-art trackers.
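As an illustration of the components named above, the PyTorch sketch below arranges a soft mask, low-level/high-level feature fusion, and channel and spatial attention into one module. This is an assumed, generic arrangement rather than the authors' architecture; layer sizes are placeholders and the two feature maps are assumed to share a spatial resolution.

import torch
import torch.nn as nn

class SoftMaskAttentionFusion(nn.Module):
    # Illustrative fusion head: a soft mask re-weights target regions of a
    # low-level feature map, the result is fused with a high-level map, and
    # channel and spatial attention refine the fused features.
    def __init__(self, low_ch, high_ch, out_ch):
        super().__init__()
        self.mask_head = nn.Sequential(          # predicts a soft target mask in [0, 1]
            nn.Conv2d(low_ch, 1, kernel_size=3, padding=1), nn.Sigmoid())
        self.fuse = nn.Conv2d(low_ch + high_ch, out_ch, kernel_size=1)
        self.channel_att = nn.Sequential(        # squeeze-and-excitation style channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // 4, out_ch, 1), nn.Sigmoid())
        self.spatial_att = nn.Sequential(        # spatial attention over (avg, max) channel statistics
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, low, high):                # low, high: same H x W resolution (assumed)
        low = low * self.mask_head(low)                       # emphasize target regions
        x = self.fuse(torch.cat([low, high], dim=1))          # low-level / high-level fusion
        x = x * self.channel_att(x)                           # pick discriminative channels
        s = torch.cat([x.mean(1, keepdim=True),
                       x.max(1, keepdim=True).values], dim=1)
        return x * self.spatial_att(s)                        # localize the target spatially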


2021 ◽  
Vol 11 (4) ◽  
pp. 1963
Author(s):  
Shanshan Luo ◽  
Baoqing Li ◽  
Xiaobing Yuan ◽  
Huawei Liu

The Discriminative Correlation Filter (DCF) has been widely adopted in visual object tracking thanks to its excellent accuracy and high speed. Nevertheless, DCF-based trackers perform poorly in long-term tracking, for two main reasons: first, they adapt poorly to the significant appearance changes that occur over long sequences and are therefore prone to tracking failure; second, they lack a practical re-detection module to find the target again after a failure. In this work, we propose a new long-term tracking strategy to address these issues. First, we exploit both the static and the dynamic information of the target by introducing motion features into our long-term tracker, which yields a more robust tracker. Second, we introduce a low-rank sparse dictionary learning method for re-detection. This re-detection module exploits the correlation among training samples and alleviates the impact of occlusion and noise. Third, we propose a new reliability evaluation method that drives an adaptive update and switches, as needed, between the tracking module and the re-detection module. Extensive experiments demonstrate that the proposed approach clearly improves precision and success rate over state-of-the-art trackers.
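The switch between the tracking module and the re-detection module can be pictured with the sketch below. The reliability score here is a simple peak-to-sidelobe ratio standing in for the paper's reliability evaluation, and the tracker and redetector objects and their methods are hypothetical placeholders, not an API from the paper.

import numpy as np

def peak_to_sidelobe_ratio(response, exclude=5):
    # Simple confidence score for a correlation response map (PSR),
    # used as a stand-in for the paper's reliability evaluation.
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)

def track_step(frame, tracker, redetector, tau=6.0):
    # Switch to re-detection (e.g. sparse coding over a low-rank dictionary)
    # when reliability drops below a threshold tau; otherwise update normally.
    box, response = tracker.update(frame)        # hypothetical short-term tracker API
    if peak_to_sidelobe_ratio(response) < tau:
        box = redetector.search(frame)           # hypothetical re-detection API
        tracker.reset(frame, box)                # re-initialize the correlation filter
    else:
        tracker.learn(frame, box)                # routine online model update
    return box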


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 387 ◽  
Author(s):  
Ming Du ◽  
Yan Ding ◽  
Xiuyun Meng ◽  
Hua-Liang Wei ◽  
Yifan Zhao

In recent years, regression trackers have drawn increasing attention in the visual object tracking community due to their favorable performance and easy implementation. These algorithms directly learn a mapping from dense samples around the target object to Gaussian-like soft labels. However, in many real applications, the extremely imbalanced distribution of training samples hinders the robustness and accuracy of regression trackers on test data. In this paper, we propose a novel and effective distractor-aware loss function that addresses this imbalance by highlighting the significant (target) domain and severely penalizing pure background. In addition, we introduce a fully differentiable hierarchy-normalized concatenation connection to exploit abstractions across multiple convolutional layers. Extensive experiments were conducted on five challenging tracking benchmarks, namely OTB-13, OTB-15, TC-128, UAV-123, and VOT17. The experimental results are promising and show that the proposed tracker performs much better than nearly all the compared state-of-the-art approaches.
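A hedged sketch of a distractor-aware weighted regression loss of the kind described above: the target region of the Gaussian-like label map is up-weighted, and confident responses on pure background are penalized more heavily. The weighting scheme and thresholds are assumptions for illustration, not the paper's exact formula.

import torch

def distractor_aware_loss(pred, label, bg_thresh=0.05, bg_weight=4.0, fg_weight=2.0):
    # pred, label: predicted response map and Gaussian-like soft label, same shape.
    residual = (pred - label) ** 2
    weights = torch.ones_like(label)
    weights[label > bg_thresh] = fg_weight                            # significant (target) domain
    weights[(label <= bg_thresh) & (pred > bg_thresh)] = bg_weight    # distracting background peaks
    return (weights * residual).mean()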


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6388
Author(s):  
Jia Chen ◽  
Fan Wang ◽  
Yingjie Zhang ◽  
Yibo Ai ◽  
Weidong Zhang

The visual tracking task is divided into classification and regression subtasks, and manifold features are introduced to improve the performance of the tracker. Although previous anchor-based trackers have achieved superior tracking performance, they not only require anchor parameters to be set manually but also ignore the influence of the object's geometric characteristics on tracker performance. In this paper, we propose a novel Siamese network framework with ResNet50 as the backbone: an anchor-free tracker based on manifold features. The network design is simple and easy to understand; it accounts for the influence of geometric features on tracking performance while reducing the parameter computation and improving tracking performance. In our experiments, we compared our tracker with the most advanced trackers on public benchmarks and obtained state-of-the-art performance.
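The anchor-free idea can be illustrated with a minimal PyTorch head: depth-wise cross-correlation between template and search-region features, followed by per-location classification and box-offset regression, so no anchor scales or aspect ratios have to be set by hand. The module layout and parameter choices are illustrative and not the authors' exact network.

import torch
import torch.nn as nn
import torch.nn.functional as F

def xcorr_depthwise(search, kernel):
    # Depth-wise cross-correlation between search-region and template features.
    b, c, h, w = search.shape
    out = F.conv2d(search.reshape(1, b * c, h, w),
                   kernel.reshape(b * c, 1, *kernel.shape[2:]), groups=b * c)
    return out.reshape(b, c, out.shape[2], out.shape[3])

class AnchorFreeHead(nn.Module):
    # Per-location foreground score plus distances to the four box sides,
    # predicted directly on the correlation map (no anchor boxes).
    def __init__(self, channels=256):
        super().__init__()
        self.cls = nn.Conv2d(channels, 2, kernel_size=3, padding=1)   # target / background
        self.reg = nn.Conv2d(channels, 4, kernel_size=3, padding=1)   # left, top, right, bottom offsets

    def forward(self, template_feat, search_feat):
        corr = xcorr_depthwise(search_feat, template_feat)
        return self.cls(corr), F.relu(self.reg(corr))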


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Jinping Sun

The target and the background change continuously during long-term tracking, which makes accurate target prediction highly challenging. Correlation filter algorithms based on hand-crafted features struggle to meet practical needs because of their limited feature representation ability. Thus, to improve tracking performance and robustness, an improved hierarchical convolutional feature model is incorporated into a correlation filter framework for visual object tracking. First, the objective function is designed via lasso regression modeling, and a sparse, time-series low-rank filter is learned to increase the interpretability of the model. Second, the features of the last convolutional layer and the second pooling layer of the convolutional neural network are extracted to predict the target position from coarse to fine. In addition, response maps are calculated with the filters learned from the first frame and from the current frame, respectively, and the target position is obtained by finding the maximum value of the response map. The filter model is updated only when both maximum responses meet the threshold condition. The proposed tracker is evaluated by simulation analysis on the TC-128 and OTB2015 benchmarks, which include more than 100 video sequences. Extensive experiments demonstrate that the proposed tracker achieves competitive performance against state-of-the-art trackers. The distance precision rate and overlap success rate of the proposed algorithm on OTB2015 are 0.829 and 0.695, respectively. The proposed algorithm effectively solves the long-term object tracking problem in complex scenes.
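A small sketch of the coarse-to-fine localization and the threshold-gated model update described above. The refinement window, the thresholds, and the assumption that the two response maps share one resolution are placeholders for illustration.

import numpy as np

def locate_coarse_to_fine(resp_deep, resp_shallow, radius=3):
    # Take the arg-max of the deep-layer response (coarse), then refine it
    # within a small window of the shallow-layer response (fine).
    cy, cx = np.unravel_index(resp_deep.argmax(), resp_deep.shape)
    y0, x0 = max(0, cy - radius), max(0, cx - radius)
    window = resp_shallow[y0:cy + radius + 1, x0:cx + radius + 1]
    dy, dx = np.unravel_index(window.argmax(), window.shape)
    return y0 + dy, x0 + dx

def should_update(resp_first, resp_current, tau_first=0.25, tau_current=0.3):
    # Update the filter only when the maxima of the response maps computed with
    # the first-frame filter and the current filter both pass their thresholds.
    return resp_first.max() >= tau_first and resp_current.max() >= tau_current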

