Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking

2016 ◽  
Vol 2016 ◽  
pp. 1-13
Author(s):  
Honghong Yang ◽  
Shiru Qu

Object tracking based on sparse representation has produced promising results in recent years. However, trackers built on the sparse representation framework tend to overemphasize sparsity and ignore the correlation of visual information. In addition, sparse coding methods encode each local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. First, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is built from instantaneous and stable appearance features simultaneously. A two-stage sparse coding method, which takes both the spatial neighborhood information of the image patch and the computational burden into consideration, is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of the instantaneous and reconstructed appearance models. Finally, the most reliable tracker is selected within a well-established particle filter framework, and the training set and the template library are incrementally updated based on the current tracking results. Experimental results on challenging video sequences show that the proposed algorithm performs well, with superior tracking accuracy and robustness.
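
A minimal sketch of the general idea, not the authors' implementation: a candidate patch is sparse-coded against a template dictionary and its reconstruction error is converted into a particle-filter observation likelihood. The template matrix, the number of non-zero coefficients, and `sigma` are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import SparseCoder

def observation_likelihood(candidate, templates, n_nonzero=5, sigma=0.1):
    """candidate: (d,) feature vector; templates: (n_templates, d) dictionary."""
    coder = SparseCoder(dictionary=templates,
                        transform_algorithm="omp",
                        transform_n_nonzero_coefs=n_nonzero)
    code = coder.transform(candidate[None, :])           # sparse coefficients
    reconstruction = (code @ templates)[0]               # reconstructed appearance
    error = np.sum((candidate - reconstruction) ** 2)    # reconstruction error
    return np.exp(-error / (2 * sigma ** 2))             # Gaussian-shaped likelihood
```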

2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Jiang-tao Wang ◽  
De-bao Chen ◽  
Jing-ai Zhang ◽  
Su-wen Li ◽  
Xing-jun Wang

Generally, subspace learning based methods such as the Incremental Visual Tracker (IVT) have been shown to be quite effective for the visual tracking problem. However, such a tracker may fail to follow the target when it undergoes drastic pose or illumination changes. In this work, we present a novel tracker that enhances the IVT algorithm by employing a multicue-based adaptive appearance model. First, we integrate the cues both in feature space and in geometric space. Second, the integration depends directly on the dynamically changing reliabilities of the visual cues. These two aspects of our method allow the tracker to adapt easily to changes in the context and accordingly improve tracking accuracy by resolving ambiguities. Experimental results demonstrate that subspace-based tracking is strongly improved by exploiting multiple cues through the proposed algorithm.
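
A minimal sketch of reliability-driven cue fusion, assuming per-cue reliabilities are measured each frame (e.g., from each cue's matching score); the update rate `alpha` is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def update_cue_weights(weights, reliabilities, alpha=0.1):
    """Blend previous cue weights with the current normalized reliabilities."""
    reliabilities = np.asarray(reliabilities, dtype=float)
    target = reliabilities / reliabilities.sum()
    weights = (1 - alpha) * weights + alpha * target
    return weights / weights.sum()

def fuse_cue_scores(scores, weights):
    """Weighted fusion of per-cue candidate scores (one row per cue)."""
    return weights @ np.asarray(scores, dtype=float)
```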


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1129 ◽  
Author(s):  
Jianming Zhang ◽  
Yang Liu ◽  
Hehua Liu ◽  
Jin Wang

Visual object tracking is a significant technology for camera-based sensor network applications. Multilayer convolutional features used comprehensively in correlation filter (CF)-based tracking algorithms have achieved excellent performance. However, tracking failures still occur in some challenging situations because ordinary features cannot adequately represent object appearance variations and the correlation filters are updated irrationally. In this paper, we propose a local–global multiple correlation filters (LGCF) tracking algorithm for edge computing systems capturing moving targets, such as vehicles and pedestrians. First, we construct a global correlation filter model with deep convolutional features, and choose a horizontal or vertical division according to the aspect ratio to build two local filters with hand-crafted features. Then, we propose a local–global collaborative strategy to exchange information between the local and global correlation filters. This strategy can prevent the object appearance model from being learned incorrectly. Finally, we propose a time-space peak-to-sidelobe ratio (TSPSR) to evaluate the stability of the current CF. When the estimated results of the current CF are not reliable, a Kalman filter re-detection (KFR) model is enabled to recapture the object. The experimental results show that our algorithm achieves better performance on OTB-2013 and OTB-2015 than 12 other recent tracking algorithms. Moreover, our algorithm handles various challenges in object tracking well.
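
A minimal sketch of a peak-to-sidelobe ratio (PSR) check on a correlation response map, with a simple comparison against the temporal average standing in for the paper's time-space PSR (TSPSR); the exclusion window and threshold ratio are illustrative assumptions.

```python
import numpy as np

def psr(response, exclude=11):
    """Peak-to-sidelobe ratio of a 2D correlation response map."""
    peak_idx = np.unravel_index(np.argmax(response), response.shape)
    peak = response[peak_idx]
    mask = np.ones_like(response, dtype=bool)
    r0 = max(peak_idx[0] - exclude // 2, 0)
    c0 = max(peak_idx[1] - exclude // 2, 0)
    mask[r0:peak_idx[0] + exclude // 2 + 1, c0:peak_idx[1] + exclude // 2 + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)

def is_reliable(psr_history, current_psr, ratio=0.6):
    """Flag the current frame as unreliable when its PSR drops well below the
    running average, which would trigger the Kalman-filter re-detection."""
    return current_psr >= ratio * np.mean(psr_history)
```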


Algorithms ◽  
2018 ◽  
Vol 11 (8) ◽  
pp. 126
Author(s):  
Zhiguo Song ◽  
Jifeng Sun ◽  
Jialin Yu ◽  
Shengqing Liu

Appearance models play an important role in visual tracking. Effectively modeling the appearance of tracked objects is still a challenging problem because of appearance changes caused by factors such as partial occlusion, illumination variation, and deformation. In this paper, we propose a tracking method based on a patch descriptor and structural local sparse representation. In our method, the object is first divided into multiple non-overlapping patches, and the patch sparse coefficients are obtained by structural local sparse representation. Second, each patch is further decomposed into several sub-patches. The patch descriptor is defined as the proportion of sub-patches whose reconstruction error is less than a given threshold. Finally, the appearance of an object is modeled by the patch descriptors and the patch sparse coefficients. Furthermore, in order to adapt to appearance changes of an object and alleviate model drift, an outlier-aware template update scheme is introduced. Experimental results on a large benchmark dataset demonstrate the effectiveness of the proposed method.
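
A minimal sketch of the patch-descriptor idea as described above: the descriptor of a patch is the fraction of its sub-patches whose reconstruction error falls below a threshold. The sparse reconstruction step is abstracted into a callable, and the threshold value is an illustrative assumption.

```python
import numpy as np

def patch_descriptor(sub_patches, reconstruct, threshold=0.05):
    """sub_patches: (n, d) array; reconstruct: maps a (d,) vector to its
    reconstruction from the local sparse dictionary."""
    errors = np.array([np.sum((p - reconstruct(p)) ** 2) for p in sub_patches])
    return np.mean(errors < threshold)   # proportion of well-reconstructed sub-patches
```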


2014 ◽  
Vol 1044-1045 ◽  
pp. 1302-1308
Author(s):  
Shao Mei Li ◽  
Kai Wang ◽  
Chao Gao ◽  
Ya Wen Wang

To address the tracking drift that often occurs in adaptive tracking, an algorithm based on the fusion of tracking and detection is proposed in this paper. First, the object is tracked frame by frame using a color histogram and particle filtering. Second, the tracking result is validated in reverse based on particle filtering. Finally, the object is relocated by SIFT feature matching and voting when drift occurs, and the object appearance model is updated at the same time. The algorithm can not only detect tracking drift but also relocate the object whenever needed. Experimental results demonstrate that this algorithm outperforms state-of-the-art algorithms on many challenging sequences.
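
A minimal sketch of the color-histogram likelihood typically used to weight particles in this kind of tracker; the Bhattacharyya coefficient is the usual similarity measure, and the number of bins and `sigma` are illustrative assumptions.

```python
import numpy as np

def color_histogram(patch_hsv, bins=16):
    """Normalized hue histogram of an HSV image patch (H channel in [0, 180))."""
    hist, _ = np.histogram(patch_hsv[..., 0], bins=bins, range=(0, 180))
    return hist / (hist.sum() + 1e-8)

def particle_weight(candidate_hist, reference_hist, sigma=0.1):
    """Particle weight from the Bhattacharyya distance between histograms."""
    bc = np.sum(np.sqrt(candidate_hist * reference_hist))   # Bhattacharyya coefficient
    distance = np.sqrt(max(1.0 - bc, 0.0))
    return np.exp(-distance ** 2 / (2 * sigma ** 2))
```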


2019 ◽  
Vol 2019 (1) ◽  
pp. 320-325 ◽  
Author(s):  
Wenyu Bao ◽  
Minchen Wei

Great efforts have been made to develop color appearance models that predict the color appearance of stimuli under various viewing conditions. CIECAM02, the most widely used color appearance model, and many other color appearance models were all developed based on corresponding color datasets, including the LUTCHI data. Though the effect of adapting light level on color appearance, known as the "Hunt Effect", is well documented, most of the corresponding color datasets were collected within a limited range of light levels (i.e., below 700 cd/m2), which is much lower than daylight levels. A recent study investigating color preference for an artwork under light levels from 20 to 15000 lx suggested that existing color appearance models may not accurately characterize the color appearance of stimuli under extremely high light levels, based on the assumption that the same preference judgements imply the same color appearance. This article reports a psychophysical study designed to directly collect corresponding colors under two light levels, 100 and 3000 cd/m2 (i.e., ≈ 314 and 9420 lx). Human observers completed haploscopic color matching for four color stimuli (i.e., red, green, blue, and yellow) under the two light levels at 2700 or 6500 K. Though the results supported the Hunt Effect, CIECAM02 was found to have large errors under the extremely high light levels, especially when the CCT was low.
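
A small illustration of how the adapting luminance enters CIECAM02 through the luminance-level adaptation factor F_L (the standard CIECAM02 formula); taking L_A as 20% of the stimulus luminance is a common convention and is used here only to contrast the two experimental levels, not as a detail of this study.

```python
def ciecam02_FL(L_A):
    """Luminance-level adaptation factor F_L for adapting luminance L_A (cd/m2)."""
    k = 1.0 / (5.0 * L_A + 1.0)
    return 0.2 * k**4 * (5.0 * L_A) + 0.1 * (1.0 - k**4)**2 * (5.0 * L_A)**(1.0 / 3.0)

print(ciecam02_FL(0.2 * 100))    # adapting field for the 100 cd/m2 condition
print(ciecam02_FL(0.2 * 3000))   # adapting field for the 3000 cd/m2 condition
```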


Symmetry ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 266 ◽  
Author(s):  
Yifeng Wang ◽  
Zhijiang Zhang ◽  
Ning Zhang ◽  
Dan Zeng

The one-shot multiple object tracking (MOT) framework has drawn increasing attention in the MOT research community due to its advantage in inference speed. However, current one-shot approaches tend to have inferior tracking accuracy compared with their two-stage counterparts. The reasons are two-fold: first, motion information is often neglected due to the single-image input; second, detection and re-identification (ReID) are two different tasks with different focuses, and joining them at the training stage can lead to suboptimal performance. To alleviate these limitations, we propose a one-shot network named Motion and Correlation-Multiple Object Tracking (MAC-MOT). MAC-MOT introduces a motion enhance attention module (MEA) and a dual correlation attention module (DCA). MEA computes differences between adjacent feature maps, which enhances motion-related features while suppressing irrelevant information. The DCA module focuses on decoupling the detection and re-identification tasks to strike a balance and reduce the competition between them. Moreover, symmetry is a core design idea of our framework, reflected in the Siamese-based deep learning backbone, the dual-stream image input, and the dual correlation attention module. Our approach is evaluated on the popular multiple object tracking benchmarks MOT16 and MOT17. We demonstrate that MAC-MOT achieves better performance than baseline state-of-the-art (SOTA) methods.
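
A minimal sketch (not the MAC-MOT code) of the motion-enhance idea: take the difference of feature maps from adjacent frames, squash it into an attention map, and re-weight the current features. The tensor shapes and the sigmoid gating are illustrative assumptions.

```python
import numpy as np

def motion_enhance(feat_prev, feat_curr):
    """feat_*: (C, H, W) feature maps from adjacent frames."""
    diff = np.abs(feat_curr - feat_prev)                   # motion-related responses
    attention = 1.0 / (1.0 + np.exp(-diff.mean(axis=0)))   # (H, W) attention map
    return feat_curr * attention[None, :, :]               # emphasize moving regions
```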


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 2894
Author(s):  
Minh-Quan Dao ◽  
Vincent Frémont

Multi-Object Tracking (MOT) is an integral part of any autonomous driving pipeline because it produces trajectories of other moving objects in the scene and predicts their future motion. Thanks to recent advances in 3D object detection enabled by deep learning, track-by-detection has become the dominant paradigm in 3D MOT. In this paradigm, a MOT system is essentially made of an object detector and a data association algorithm that establishes track-to-detection correspondence. While 3D object detection has been actively researched, association algorithms for 3D MOT have settled on bipartite matching formulated as a Linear Assignment Problem (LAP) and solved by the Hungarian algorithm. In this paper, we adapt a two-stage data association method, which was successfully applied to image-based tracking, to the 3D setting, thus providing an alternative for data association in 3D MOT. Our method outperforms the baseline, which uses one-stage bipartite matching for data association, achieving 0.587 Average Multi-Object Tracking Accuracy (AMOTA) on the NuScenes validation set and 0.365 AMOTA (at level 2) on the Waymo test set.
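
A minimal sketch of the one-stage baseline described above: track-to-detection association as a Linear Assignment Problem solved with the Hungarian algorithm (scipy's `linear_sum_assignment`); the distance gate value is an illustrative assumption, and the paper's two-stage cascade is not reproduced here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(cost_matrix, gate=2.0):
    """cost_matrix[i, j]: distance between track i and detection j."""
    rows, cols = linear_sum_assignment(cost_matrix)
    matches = [(r, c) for r, c in zip(rows, cols) if cost_matrix[r, c] <= gate]
    unmatched_tracks = set(range(cost_matrix.shape[0])) - {r for r, _ in matches}
    unmatched_dets = set(range(cost_matrix.shape[1])) - {c for _, c in matches}
    return matches, unmatched_tracks, unmatched_dets
```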


Author(s):  
Hongyang Yu ◽  
Guorong Li ◽  
Weigang Zhang ◽  
Hongxun Yao ◽  
Qingming Huang

Author(s):  
Xiuhua Hu ◽  
Yuan Chen ◽  
Yan Hui ◽  
Yingyu Liang ◽  
Guiping Li ◽  
...  

To tackle the tracking drift easily caused by complex factors during the tracking process, this paper proposes an improved object tracking method under the kernel correlation filter framework. To obtain discriminative information that is insensitive to object appearance change, it combines dimensionality-reduced Histogram of Oriented Gradients features with Lab color features, exploiting their complementary characteristics robustly. Based on the idea of multi-resolution pyramid theory, a multi-scale model of the object is constructed, and the optimal scale for tracking is found from the response peaks of the confidence maps at different sizes. Because inappropriate model updating can easily cause tracking failure, occlusion is detected by checking whether the occlusion rate of the response peak corresponding to the best object state is less than a set threshold. At the same time, a Kalman filter is used to record the motion information of the object before occlusion and to predict the state of the occluded object, which enables robust tracking of the object under occlusion. Experimental results show the effectiveness of the proposed method in handling various internal and external interferences under challenging environments.
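
A minimal sketch of the multi-scale search described above: evaluate the correlation-filter response at several candidate scales and keep the scale whose response peak is highest. The scale set is an illustrative assumption, and `filter_response` stands in for the kernel correlation filter evaluation.

```python
import numpy as np

def best_scale(image, center, base_size, filter_response,
               scales=(0.95, 1.0, 1.05)):
    """Return the scale, peak value, and peak position with the highest response."""
    best = None
    for s in scales:
        size = (int(base_size[0] * s), int(base_size[1] * s))
        response = filter_response(image, center, size)   # confidence map at this scale
        peak = response.max()
        if best is None or peak > best[1]:
            best = (s, peak, np.unravel_index(response.argmax(), response.shape))
    return best
```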

