Visual Object Tracking with Online Updating for Car Sharing Services

2021, Vol. 2021, pp. 1-9
Author(s): Zhou Zhu, Haifeng Zhao, Fang Hui, Yan Zhang

In this paper, we address the problem of online updating of a visual object tracker for car sharing services. The key idea is to adjust the updating rate adaptively according to the tracking performance on the current frame. Instead of setting a fixed weight for all frames in the update of the object model, we assign the current frame a larger weight if its tracking result is relatively accurate and intact, and a smaller weight otherwise. To implement this, the intersection over union (IOU) of the current estimated bounding box is computed by an IOU predictor, trained offline on a large number of image pairs, and used as guidance to adjust the updating weights online. Finally, we embed the proposed model update strategy in a lightweight baseline tracker. Experimental results on both traffic and non-traffic datasets verify that, although error in the predicted IOU is inevitable, the proposed method still improves tracking accuracy compared with the baseline tracker.
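As a minimal sketch of the idea, assuming a hypothetical `predicted_iou` value in [0, 1] standing in for the output of the offline-trained IOU predictor and a simple linear-blend object model (the paper's exact update rule is not given in the abstract), the update weight can be scaled by the predicted IOU as follows:

```python
import numpy as np

def adaptive_model_update(model, frame_feat, predicted_iou,
                          base_rate=0.02, iou_floor=0.3):
    """Blend the current frame into the object model with a weight that
    scales with the predicted IOU of the current bounding box.

    model, frame_feat : np.ndarray of the same shape (appearance model / frame features)
    predicted_iou     : score from the offline-trained IOU predictor, in [0, 1]
    """
    if predicted_iou < iou_floor:
        # Tracking result judged unreliable: skip the update for this frame.
        rate = 0.0
    else:
        # Scale the base learning rate by how accurate the result looks.
        rate = base_rate * predicted_iou
    return (1.0 - rate) * model + rate * frame_feat
```

A low predicted IOU effectively freezes the model, while an accurate, intact frame contributes close to the full base rate.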

2021
Author(s): Shaolong Chen, Changzhen Qiu, Yurong Huang, Zhiyong Zhang

Abstract: In visual object tracking, algorithms based on discriminative model prediction have shown favorable performance in recent years. Probabilistic discriminative model prediction (PrDiMP) is a typical tracker of this kind. PrDiMP evaluates tracking results through the output of the tracker to guide the online update of the model. However, the tracker output is not always reliable, especially in the case of fast motion, occlusion, or background clutter, and simply using it to guide the model update can easily lead to drift. In this paper, we present a robust model update strategy that effectively integrates the maximum response, multi-peak, and detector cues to guide the model update of PrDiMP. Furthermore, we analyze the impact of different model update strategies on the performance of PrDiMP. Extensive experiments and comparisons with state-of-the-art trackers on the four benchmarks VOT2018, VOT2019, NFS, and OTB100 demonstrate the effectiveness and superiority of our algorithm.
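The abstract does not spell out the fusion rule, but a plausible sketch of such a gated update, with assumed thresholds and a hypothetical `detector_conf` score for the current box (not the paper's actual formulation), could look like this:

```python
import numpy as np

def should_update(response, detector_conf,
                  resp_thresh=0.25, peak_ratio_thresh=0.6, det_thresh=0.5):
    """Decide whether to update the model from the current frame by fusing
    three cues: maximum response, multi-peak structure of the response map,
    and the confidence of an auxiliary detector.

    response      : 2-D score map produced by the tracker
    detector_conf : confidence of an auxiliary detector for the same box
    """
    peak = response.max()

    # Suppress a neighbourhood around the main peak and inspect the next peak.
    r = response.astype(float).copy()
    py, px = np.unravel_index(np.argmax(r), r.shape)
    y0, y1 = max(0, py - 5), min(r.shape[0], py + 6)
    x0, x1 = max(0, px - 5), min(r.shape[1], px + 6)
    r[y0:y1, x0:x1] = -np.inf
    second_peak = r.max()

    reliable_peak = peak > resp_thresh
    single_peak = (second_peak / (peak + 1e-12)) < peak_ratio_thresh
    detector_agrees = detector_conf > det_thresh
    return reliable_peak and single_peak and detector_agrees
```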


Filomat, 2020, Vol. 34 (15), pp. 5139-5148
Author(s): Yan Zhou, Hongwei Guo, Dongli Wang, Chunjiang Liao

The efficient convolution operator (ECO) tracker has achieved outstanding results in visual object tracking. However, in the pursuit of performance improvement, its computational burden becomes heavy, and the importance of different feature layers is not considered. In this paper, we propose a self-adaptive mechanism for regulating the training process in the first frame. To overcome over-fitting during tracking, we adopt a fuzzy model update strategy. Moreover, we weight different feature maps to enhance tracker performance. Comprehensive experiments were conducted on the OTB-2013 dataset. When these ideas are used to adjust our tracker, the self-adaptive mechanism avoids unnecessary training iterations, and the fuzzy update strategy reduces tracking computation by one fifth compared with ECO. With this reduced computation, the tracker based on our ideas incurs less than 1% loss in AUC (area under curve).
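A rough illustration of the self-adaptive first-frame training idea, assuming a hypothetical `loss_step` callable that performs one optimisation step and returns the current loss (the paper's actual stopping rule is not described in the abstract), is an early-stopping loop that halts once the loss stops improving:

```python
def self_adaptive_train(loss_step, max_iters=200, tol=1e-3, patience=5):
    """First-frame training loop that stops when the loss has plateaued,
    avoiding unnecessary training iterations.

    loss_step : callable performing one optimisation step and returning the loss
    """
    best, stale = float("inf"), 0
    iters_used = 0
    for it in range(max_iters):
        loss = loss_step()
        iters_used = it + 1
        if best - loss > tol:
            best, stale = loss, 0          # meaningful improvement: keep going
        else:
            stale += 1
            if stale >= patience:          # converged "enough": stop early
                break
    return iters_used
```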


2013, Vol. 2013, pp. 1-7
Author(s): Ming-Xin Jiang, Min Li, Hong-Yu Wang

We present a novel visual object tracking algorithm based on two-dimensional principal component analysis (2DPCA) and maximum likelihood estimation (MLE). First, we introduce regularization into the 2DPCA reconstruction and develop an iterative algorithm to represent an object with 2DPCA bases. Second, a sparsity-constrained MLE model is established; abnormal pixels in the samples are assigned low weights to reduce their effect on the tracking algorithm. The tracking results are obtained by Bayesian maximum a posteriori (MAP) estimation. Finally, to further reduce tracking drift, we employ a template update strategy that combines incremental subspace learning with an error matrix. This strategy adapts the template to appearance changes of the target and reduces the influence of occluded target templates. Compared with other popular methods, our method reduces computational complexity and is very robust to abnormal changes. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed tracking algorithm achieves more favorable performance than several state-of-the-art methods.
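One way to realise the low-weighting of abnormal pixels, sketched here under the assumption of a standard right-projection 2DPCA reconstruction and a Gaussian-style weighting of the squared residual (the paper's exact weighting is not given in the abstract), is:

```python
import numpy as np

def residual_weights(sample, bases, mean, sigma=0.1):
    """Reconstruct a sample with 2DPCA bases and down-weight pixels whose
    reconstruction residual is large (treated as abnormal/occluded).

    sample : (h, w) image patch
    bases  : (w, k) right-projection 2DPCA basis matrix
    mean   : (h, w) mean image of the training samples
    """
    centred = sample - mean
    coeff = centred @ bases                 # (h, k) projection coefficients
    recon = coeff @ bases.T + mean          # reconstruction from the 2DPCA bases
    residual = (sample - recon) ** 2
    # Large residual -> weight close to 0, so abnormal pixels barely contribute.
    weights = np.exp(-residual / (2.0 * sigma ** 2))
    return weights, recon
```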


Author(s): Jianglei Huang, Wengang Zhou

Target model update plays an important role in visual object tracking, yet performing an optimal model update is challenging. In this work, we propose to obtain an optimal target model by learning a transformation matrix from the last target model to the newly generated one, which results in a minimization objective. This objective poses two challenges. The first is that the newly generated target model is unreliable; to overcome this, we impose a penalty that limits the distance between the learned target model and the last one. The second is that, as time evolves, we cannot decide whether the last target model has been corrupted; to get out of this dilemma, we propose a reinitialization term. In addition, to control the complexity of the transformation matrix, we add a regularizer. We find that the solution of the optimization, with some simplifications, degenerates to an exponential moving average (EMA). Finally, despite its simplicity, extensive experiments conducted on several commonly used benchmarks demonstrate the effectiveness of our proposed approach in relatively long-term scenarios.
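Under the simplifications mentioned in the abstract, the update collapses to a plain EMA between the last and the newly generated model; a minimal sketch, with an assumed blending factor `alpha`, is:

```python
import numpy as np

def ema_update(last_model, new_model, alpha=0.1):
    """Degenerate closed-form update: with the simplifications noted above,
    learning a transformation from the last model to the new one (under a
    distance penalty and a regularizer) reduces to an exponential moving
    average of the two models.
    """
    return (1.0 - alpha) * last_model + alpha * new_model
```

The interest of the formulation lies in the full objective (penalty, reinitialization term, regularizer); the EMA is only its simplified solution.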


Complexity, 2021, Vol. 2021, pp. 1-16
Author(s): Jinping Sun

The target and background change continuously during long-term tracking, which poses great challenges to accurate target prediction. Correlation filter algorithms based on handcrafted features struggle to meet practical needs because of their limited feature representation ability. Thus, to improve tracking performance and robustness, an improved hierarchical convolutional feature model is incorporated into a correlation filter framework for visual object tracking. First, the objective function is designed by lasso regression modeling, and a sparse, temporally low-rank filter is learned to increase the interpretability of the model. Second, the features of the last convolutional layer and the second pooling layer of the convolutional neural network are extracted to realize coarse-to-fine target position prediction. In addition, the filters learned from the first frame and the current frame are used to calculate two response maps, and the target position is obtained by finding the maximum value of the response map; the filter model is updated only when both maximum responses meet a threshold condition. The proposed tracker is evaluated by simulation analysis on the TC-128 and OTB2015 benchmarks, which include more than 100 video sequences. Extensive experiments demonstrate that the proposed tracker achieves competitive performance against state-of-the-art trackers. The distance precision rate and overlap success rate of the proposed algorithm on OTB2015 are 0.829 and 0.695, respectively. The proposed algorithm effectively addresses long-term object tracking in complex scenes.
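A simple sketch of the dual-response gated update, with assumed thresholds and learning rate rather than the values used in the paper, is shown below:

```python
def conservative_update(resp_first, resp_current, filt, new_filt,
                        thresh_first=0.2, thresh_current=0.3, lr=0.015):
    """Update the filter only when the maximum responses of the filter learned
    on the first frame and of the current filter both exceed their thresholds,
    guarding against learning from contaminated frames.

    resp_first, resp_current : response maps of the first-frame and current filters
    filt, new_filt           : current filter model and newly estimated filter
    """
    if resp_first.max() > thresh_first and resp_current.max() > thresh_current:
        return (1.0 - lr) * filt + lr * new_filt   # both cues reliable: accept update
    return filt                                     # otherwise keep the previous model
```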


2017, Vol. 2017, pp. 1-9
Author(s): Suryo Adhi Wibowo, Hansoo Lee, Eun Kyeong Kim, Sungshin Kim

The histogram of oriented gradients (HOG) is a feature descriptor typically used for object detection. For object tracking, this feature has certain drawbacks when the target object is affected by changes in motion or size. In this paper, convolutional shallow features are proposed to improve the performance of HOG-based object tracking. Because the proposed method works within a correlation filter framework, the response maps for each feature are summed to obtain the final response map, and the location of the target object is then predicted from the maximum value of the optimized final response map. Further, a model update is used to cope with appearance changes of the target during tracking. The proposed method is evaluated using the Visual Object Tracking 2015 (VOT2015) benchmark dataset and its protocols, and the results are reported in terms of accuracy-robustness (AR) rank. In a comparison with several state-of-the-art tracking algorithms, the proposed method achieved the highest rank in accuracy and third rank in robustness. In addition, the proposed method significantly improves the robustness of HOG-based features.
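A minimal sketch of the response-map fusion step, assuming all per-feature response maps (e.g. HOG and convolutional shallow features) share the same spatial size, is:

```python
import numpy as np

def fuse_and_localise(response_maps):
    """Sum the correlation response maps computed for each feature type and
    return the location of the maximum of the fused map as the predicted
    target position.

    response_maps : iterable of 2-D arrays with identical shape
    """
    final = np.sum(np.stack(list(response_maps), axis=0), axis=0)
    dy, dx = np.unravel_index(np.argmax(final), final.shape)
    return (dy, dx), final
```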


Sensors, 2018, Vol. 18 (11), pp. 3937
Author(s): Yihong Zhang, Yijin Yang, Wuneng Zhou, Lifeng Shi, Demin Li

Discriminative correlation filter-based methods struggle to deal with fast motion and heavy occlusion, which can severely degrade tracker performance and ultimately lead to tracking failures. In this paper, a novel Motion-Aware Correlation Filters (MACF) framework is proposed for online visual object tracking, in which a motion-aware strategy based on joint instantaneous motion estimation with Kalman filters is integrated into Discriminative Correlation Filters (DCFs). The motion-aware strategy predicts the likely region and scale of the target in the current frame from the previously estimated 3D motion information, which prevents model drift caused by fast motion. Based on the predicted region and scale, MACF detects the position and scale of the target in the current frame using the DCF-based method. Furthermore, an adaptive model updating strategy is proposed to address models corrupted by occlusion, where the learning rate is determined by the confidence of the response map. Extensive experiments on the popular object tracking benchmarks OTB-100 and OTB-50 and on unmanned aerial vehicle (UAV) video demonstrate that the proposed MACF tracker outperforms most state-of-the-art trackers while achieving real-time performance. In addition, the proposed approach can be integrated easily and flexibly into other visual tracking algorithms.
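A compact sketch of the two ingredients, assuming a standard constant-velocity Kalman prediction for the target state and a confidence-scaled DCF learning rate (all thresholds and scalings here are illustrative, not the paper's), is:

```python
import numpy as np

def kalman_predict(x, P, F, Q):
    """Constant-velocity Kalman prediction of the target state (e.g. position,
    scale and their velocities), used to centre the search region in the
    current frame.

    x : state vector, P : state covariance
    F : state-transition matrix, Q : process-noise covariance
    """
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def adaptive_learning_rate(response, base_lr=0.02, conf_thresh=0.15):
    """Scale the DCF learning rate by the confidence (peak) of the response
    map; a low peak, typical of occlusion, suppresses the model update."""
    conf = float(response.max())
    if conf < conf_thresh:
        return 0.0                                   # likely occluded: do not update
    return base_lr * min(1.0, conf / (2.0 * conf_thresh))
```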


Author(s): Tianyang Xu, Zhenhua Feng, Xiao-Jun Wu, Josef Kittler

Abstract: Discriminative Correlation Filters (DCF) have been shown to achieve impressive performance in visual object tracking. However, existing DCF-based trackers rely heavily on learning regularised appearance models from invariant image feature representations. To further improve the accuracy of DCF and provide a parsimonious model from the attribute perspective, we propose to gauge the relevance of multi-channel features for the purpose of channel selection. This is achieved by assessing the information conveyed by the features of each channel as a group, using an adaptive group elastic net that induces independent sparsity and temporal smoothness on the DCF solution. The robustness and stability of the learned appearance model are significantly enhanced by the proposed method, as the process of channel selection performs implicit spatial regularisation. We use the augmented Lagrangian method to optimise the discriminative filters efficiently. Experimental results obtained on a number of well-known benchmarking datasets demonstrate the effectiveness and stability of the proposed method. Superior performance over state-of-the-art trackers is achieved using less than 10% of the deep feature channels.
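The abstract does not give the optimisation details, but the channel-selection effect of a group elastic net can be sketched with its proximal (group soft-thresholding) operator, applied per channel with assumed penalty weights; this is only an illustration of the group-sparsity mechanism, not the paper's augmented Lagrangian solver:

```python
import numpy as np

def group_elastic_net_select(filters, lam1=0.05, lam2=0.01):
    """Treat each feature channel's filter as a group and apply the proximal
    operator of a group elastic net (group soft-thresholding plus ridge
    shrinkage), zeroing out channels whose filter energy is low.

    filters : (C, H, W) array, one spatial filter per feature channel
    Returns the shrunk filters and a boolean mask of the channels kept.
    """
    shrunk = np.empty_like(filters, dtype=float)
    kept = np.zeros(filters.shape[0], dtype=bool)
    for c, f in enumerate(filters):
        norm = np.linalg.norm(f)
        # prox of lam1*||f|| + (lam2/2)*||f||^2: group-shrink, then ridge-scale.
        scale = max(0.0, 1.0 - lam1 / (norm + 1e-12)) / (1.0 + lam2)
        shrunk[c] = scale * f
        kept[c] = scale > 0.0
    return shrunk, kept
```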

