A Novel Anti-Drift Visual Object Tracking Algorithm Based on Sparse Response and Adaptive Spatial-Temporal Context-Aware

2021 ◽  
Vol 13 (22) ◽  
pp. 4672
Author(s):  
Yinqiang Su ◽  
Jinghong Liu ◽  
Fang Xu ◽  
Xueming Zhang ◽  
Yujia Zuo

Correlation filter (CF) based trackers have gained significant attention in the field of visual single-object tracking, owing to their favorable performance and high efficiency; however, existing trackers still suffer from model drift caused by boundary effects and filter degradation. In visual tracking, long-term occlusion and large appearance variations easily cause model degradation. To remedy these drawbacks, we propose a sparse adaptive spatial-temporal context-aware method that effectively avoids model drift. Specifically, a global context is explicitly incorporated into the correlation filter to mitigate boundary effects. Subsequently, an adaptive temporal regularization constraint is adopted in the filter training stage to avoid model degradation. Meanwhile, a sparse response constraint is introduced to reduce the risk of further model drift. Furthermore, we apply the alternating direction method of multipliers (ADMM) to derive a closed-form solution of the objective function at a low computational cost. In addition, an updating scheme based on the APCE-pool and Peak-pool is proposed to reveal the tracking condition and ensure high-confidence updates of the target's appearance model. A Kalman filter is adopted to track the target when the appearance model is persistently unreliable and an abnormality occurs. Finally, extensive experimental results on the OTB-2013, OTB-2015 and VOT2018 datasets show that our proposed tracker performs favorably against several state-of-the-art trackers.
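The high-confidence update scheme above relies on the average peak-to-correlation energy (APCE) of the response map. As a minimal illustrative sketch (not the authors' implementation), APCE can be computed from a CF response map like this:

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy of a CF response map.

    A high APCE indicates a sharp, unimodal peak (reliable detection);
    a sudden drop relative to its running history suggests occlusion or
    drift, signalling that the appearance model should not be updated.
    """
    f_max = response.max()
    f_min = response.min()
    return (f_max - f_min) ** 2 / np.mean((response - f_min) ** 2)
```

For example, a map that is zero everywhere except a single unit peak on a 5x5 grid yields an APCE of 25, while a noisy multi-modal map scores far lower.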

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2841
Author(s):  
Khizer Mehmood ◽  
Abdul Jalil ◽  
Ahmad Ali ◽  
Baber Khan ◽  
Maria Murad ◽  
...  

Despite eminent progress in recent years, various challenges associated with object tracking algorithms, such as scale variations, partial or full occlusions, background clutter, and illumination variations, still need to be resolved with improved estimation for real-time applications. This paper proposes a robust and fast algorithm for object tracking based on spatio-temporal context (STC). A pyramid-representation-based scale correlation filter is incorporated to overcome STC's inability to handle rapid changes in target scale. It learns the appearance induced by variations in target scale sampled over a set of different scales. During occlusion, most correlation filter trackers start drifting due to wrong sample updates. To prevent the target model from drifting, an occlusion detection and handling mechanism is incorporated. Occlusion is detected from the peak correlation score of the response map. After occlusion is successfully detected, an extended Kalman filter handles it by continuously predicting the target location and passing it to the STC tracking model. This decreases the chance of tracking failure, as the Kalman filter continuously updates itself and the tracking model. The model is further improved by fusion with the average peak-to-correlation energy (APCE) criterion, which automatically updates the target model to cope with environmental changes. Extensive evaluations on benchmark datasets indicate the efficacy of the proposed tracking method compared with the state of the art.
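The occlusion-handling idea above is that a Kalman filter keeps coasting the target forward while the correlation response is unreliable. As a simplified sketch, assuming a linear constant-velocity motion model (the paper uses an extended Kalman filter, which reduces to this when the motion model is linear):

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 2-D constant-velocity Kalman filter.

    State x = [px, py, vx, vy]. During occlusion only predict() is
    called, so the target coasts along its last estimated velocity;
    once the tracker reacquires the target, update() fuses the new
    measurement back in.
    """
    def __init__(self, x0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.asarray(x0, dtype=float)     # state estimate
        self.P = np.eye(4)                       # state covariance
        self.F = np.eye(4)                       # transition matrix
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                    # observe position only
        self.Q = q * np.eye(4)                   # process noise
        self.R = r * np.eye(2)                   # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                        # predicted position

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

Starting from position (0, 0) with unit velocity along x, three prediction steps place the target at (3, 0), which is the location that would be handed to the STC model during a three-frame occlusion.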


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1129 ◽  
Author(s):  
Jianming Zhang ◽  
Yang Liu ◽  
Hehua Liu ◽  
Jin Wang

Visual object tracking is a significant technology for camera-based sensor network applications. Multilayer convolutional features comprehensively used in correlation filter (CF)-based tracking algorithms have achieved excellent performance. However, tracking failures occur in some challenging situations because ordinary features cannot adequately represent object appearance variations and the correlation filters are updated irrationally. In this paper, we propose a local-global multiple correlation filters (LGCF) tracking algorithm for edge computing systems capturing moving targets, such as vehicles and pedestrians. First, we construct a global correlation filter model with deep convolutional features, and choose a horizontal or vertical division according to the aspect ratio to build two local filters with hand-crafted features. Then, we propose a local-global collaborative strategy to exchange information between the local and global correlation filters. This strategy can avoid erroneous learning of the object appearance model. Finally, we propose a time-space peak-to-sidelobe ratio (TSPSR) to evaluate the stability of the current CF. When the estimated results of the current CF are not reliable, the Kalman filter re-detection (KFR) model is enabled to recapture the object. The experimental results show that our algorithm achieves better performance on OTB-2013 and OTB-2015 than 12 other recent tracking algorithms. Moreover, our algorithm handles various challenges in object tracking well.
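The TSPSR stability measure above builds on the classic peak-to-sidelobe ratio (PSR) of a correlation response. As an illustrative sketch of the underlying PSR only (the paper's time-space extension aggregates it over frames and filters, which is not reproduced here):

```python
import numpy as np

def psr(response, exclude=2):
    """Peak-to-sidelobe ratio of a correlation response map.

    The sidelobe is everything outside a small window around the peak.
    A low PSR flags an unreliable response, which is the cue for
    switching to a re-detection mechanism such as a Kalman filter.
    """
    py, px = np.unravel_index(np.argmax(response), response.shape)
    peak = response[py, px]
    mask = np.ones_like(response, dtype=bool)
    mask[max(py - exclude, 0):py + exclude + 1,
         max(px - exclude, 0):px + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)
```

A clean single-peaked response scores a very high PSR, while a response with a competing secondary peak (e.g., a distractor object) scores noticeably lower.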


Author(s):  
Yang Li ◽  
Jianke Zhu ◽  
Steven C.H. Hoi ◽  
Wenjie Song ◽  
Zhefeng Wang ◽  
...  

Most existing correlation filter-based tracking approaches only estimate simple axis-aligned bounding boxes, and very few of them are capable of recovering the underlying similarity transformation. To tackle this challenging problem, in this paper we propose a new correlation filter-based tracker with a novel robust estimation of similarity transformation under large displacements. To efficiently search such a large 4-DoF space in real time, we decompose the problem into two 2-DoF sub-problems and apply an efficient block coordinate descent solver to optimize the estimate. Specifically, we employ an efficient phase correlation scheme to deal with scale and rotation changes simultaneously in log-polar coordinates. Moreover, a variant of the correlation filter is used to predict the translational motion separately. Our experimental results demonstrate that the proposed tracker achieves very promising prediction performance compared with state-of-the-art visual object tracking methods while retaining the high efficiency and simplicity of conventional correlation filter-based tracking methods.
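The phase correlation scheme above exploits the fact that, in log-polar coordinates, a scale change becomes a shift along the log-radius axis and a rotation a shift along the angle axis, so both reduce to translation estimation. As a minimal sketch of the core translation-recovery step (the log-polar warp of the spectra is omitted):

```python
import numpy as np

def phase_correlate(a, b):
    """Recover the circular shift taking image b to image a.

    In log-polar coordinates the two recovered shift components
    correspond to log-scale and rotation rather than translation.
    """
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12           # keep only the phase difference
    corr = np.fft.ifft2(R).real      # delta at the relative shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape                   # wrap shifts into [-n/2, n/2)
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

For a circularly shifted copy of an image, the correlation surface is an exact delta function, so the shift is recovered precisely; in the log-polar setting those two components map back to a scale factor and a rotation angle.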


Author(s):  
Tianyang Xu ◽  
Zhenhua Feng ◽  
Xiao-Jun Wu ◽  
Josef Kittler

Discriminative Correlation Filters (DCF) have been shown to achieve impressive performance in visual object tracking. However, existing DCF-based trackers rely heavily on learning regularised appearance models from invariant image feature representations. To further improve the performance of DCF in accuracy and provide a parsimonious model from the attribute perspective, we propose to gauge the relevance of multi-channel features for the purpose of channel selection. This is achieved by assessing the information conveyed by the features of each channel as a group, using an adaptive group elastic net inducing independent sparsity and temporal smoothness on the DCF solution. The robustness and stability of the learned appearance model are significantly enhanced by the proposed method as the process of channel selection performs implicit spatial regularisation. We use the augmented Lagrangian method to optimise the discriminative filters efficiently. The experimental results obtained on a number of well-known benchmarking datasets demonstrate the effectiveness and stability of the proposed method. A superior performance over the state-of-the-art trackers is achieved using less than 10% of the deep feature channels.
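Group-sparse regularisers perform channel selection by zeroing out whole feature channels at once. As a simplified illustration of this mechanism only, the following sketches the proximal operator of a plain group-lasso (l2,1) penalty with each channel as one group; the paper's adaptive group elastic net additionally includes an l2 term and adaptive, temporally smoothed weights, which are not reproduced here:

```python
import numpy as np

def group_soft_threshold(filters, lam):
    """Proximal operator of the l2,1 (group-lasso) norm.

    `filters` has shape (channels, h, w); each channel is one group.
    Channels whose l2 energy falls below `lam` are zeroed out
    entirely, which is how group sparsity discards uninformative
    feature channels while shrinking the rest.
    """
    out = np.zeros_like(filters)
    for c, f in enumerate(filters):
        norm = np.linalg.norm(f)
        if norm > lam:
            out[c] = (1.0 - lam / norm) * f   # shrink surviving group
    return out
```

Applied inside an ADMM or augmented-Lagrangian loop, this step is what leaves only a small fraction of deep feature channels active in the final filter.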


2021 ◽  
pp. 85-127
Author(s):  
Weiwei Xing ◽  
Weibin Liu ◽  
Jun Wang ◽  
Shunli Zhang ◽  
Lihui Wang ◽  
...  
