Robust visual tracking via self‐similarity learning

2017 ◽  
Vol 53 (1) ◽  
pp. 20-22 ◽  
Author(s):  
Huihui Song ◽  
Yuhui Zheng ◽  
Kaihua Zhang
IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 68277-68287 ◽  
Author(s):  
Shiyan Wang ◽  
Yaoyao Wei ◽  
Ken Long ◽  
Xi Zeng ◽  
Min Zheng

2018 ◽  
Vol 37 (8) ◽  
pp. 1932-1942 ◽  
Author(s):  
Aurelien Bustin ◽  
Damien Voilliot ◽  
Anne Menini ◽  
Jacques Felblinger ◽  
Christian de Chillou ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 2137 ◽  
Author(s):  
Chenpu Li ◽  
Qianjian Xing ◽  
Zhenguo Ma

In the field of visual tracking, trackers based on convolutional neural networks (CNNs) have achieved significant results. The fully-convolutional Siamese (SiamFC) tracker is a typical representative of these CNN trackers and has attracted much attention. It models visual tracking as a similarity-learning problem. However, experiments showed that SiamFC was not robust in some complex environments, possibly because the tracker lacked sufficient prior information about the target. Inspired by the key ideas of the Staple tracker and the Kalman filter, we constructed two additional models to compensate for SiamFC's shortcomings: one encodes the target's prior color information, the other its prior trajectory information. With these two models, we designed a novel and robust tracking framework on the basis of SiamFC, which we call Histogram–Kalman SiamFC (HKSiamFC). We evaluated the HKSiamFC tracker on the Online Object Tracking Benchmark (OTB) and Temple Color (TC128) datasets, where it showed quite competitive performance compared with the baseline tracker and several other state-of-the-art trackers.
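The abstract does not give the authors' equations, but a trajectory prior of the kind it describes is commonly supplied by a constant-velocity Kalman filter over the target centre. The sketch below is illustrative only (all names, noise values, and the state layout are assumptions, not the HKSiamFC implementation):

```python
import numpy as np

# Constant-velocity Kalman filter over the target centre.
# State: [x, y, vx, vy]; measurement: [x, y]. Names are illustrative.
F = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])   # state transition (unit time step)
H = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])   # measurement model
Q = 0.01 * np.eye(4)               # process noise (assumed)
R = 1.0 * np.eye(2)                # measurement noise (assumed)

def predict(x, P):
    """Predict the next centre before the tracker sees the new frame."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with the tracker's measured centre z."""
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

# One predict/update cycle on a toy measurement:
x, P = np.array([10., 20., 0., 0.]), np.eye(4)
x_pred, P_pred = predict(x, P)
x, P = update(x_pred, P_pred, np.array([11., 21.]))
```

The predicted centre `x_pred` can act as the trajectory prior: a tracker response far from it is suspect, and the update step keeps the filter aligned with confident detections.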


2019 ◽  
Vol 13 (7) ◽  
pp. 623-631
Author(s):  
Yufei Zha ◽  
Min Wu ◽  
Zhuling Qiu ◽  
Wangsheng Yu

2017 ◽  
Vol 2017 ◽  
pp. 1-8
Author(s):  
Liwei Chen ◽  
Tieshen Wang ◽  
Haifeng Zhu

Subpixel mapping (SPM) algorithms estimate the spatial distribution of different land cover classes within mixed pixels. This paper proposes a new subpixel mapping method based on learning image structural self-similarity. Structural self-similarity refers to similar structures that recur within or across scales in an image and its downsampled versions, a property that is widespread in remote sensing images. Based on the similarity of image block structures, the proposed method estimates a finer spatial distribution from coarse-resolution fraction images and thereby realizes subpixel mapping. Experimental results show that the proposed method is more accurate than existing fast subpixel mapping algorithms.
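A toy sketch of the block-matching idea behind such methods, assuming a simple exemplar scheme: each coarse fraction value is matched (by sum of squared differences) to the closest coarse exemplar, and the paired fine-scale pattern is pasted into the output. In the actual algorithm the exemplar pairs come from the image's own structures; here they are hard-coded and all names are illustrative:

```python
import numpy as np

def best_match(patch, exemplars):
    """Index of the exemplar block closest to `patch` under SSD."""
    dists = [np.sum((patch - e) ** 2) for e in exemplars]
    return int(np.argmin(dists))

# Toy one-class fraction image: 2x2 coarse pixels, scale factor 2.
coarse = np.array([[0.5, 0.0],
                   [1.0, 0.5]])

# Exemplar pairs (coarse value -> fine 2x2 pattern); hard-coded here,
# learned from the image's own self-similar structures in practice.
coarse_ex = [np.array(0.0), np.array(0.5), np.array(1.0)]
fine_ex = [np.zeros((2, 2)),
           np.array([[1., 0.], [0., 1.]]),  # a 50%-coverage pattern
           np.ones((2, 2))]

fine = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        k = best_match(coarse[i, j], coarse_ex)
        fine[2 * i:2 * i + 2, 2 * j:2 * j + 2] = fine_ex[k]
```

The pasted fine patterns preserve each pixel's class fraction while committing to a concrete subpixel arrangement borrowed from a matched structure.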


Author(s):  
Kunpeng Li ◽  
Yu Kong ◽  
Yun Fu

Visual tracking has achieved remarkable success in recent decades, but it remains challenging due to appearance variations over time and complex, cluttered backgrounds. In this paper, we adopt a tracking-by-verification scheme to overcome these challenges by determining the patch in the subsequent frame that is most similar to the target template and most distinctive from the background context. A multi-stream deep similarity learning network is proposed to learn the similarity comparison model. The loss function of our network encourages the distance between a positive patch in the search region and the target template to be smaller than the distance between that positive patch and the background patches. Within the learned feature space, even if the distance to the positive patch grows because of appearance change or background clutter, our method can still use the relative distances to distinguish the target robustly. Moreover, the learned model is used directly for tracking, with no need for model updating or parameter fine-tuning, and runs at 45 fps on a single GPU. Our tracker achieves state-of-the-art performance on the visual tracking benchmark compared with other recent real-time trackers, and shows better capability in handling background clutter, occlusion, and appearance change.
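The relative-distance objective described here has the shape of a triplet-style hinge loss. A minimal sketch, not the authors' loss function (the margin, distance metric, and names are assumptions):

```python
import numpy as np

def relative_similarity_loss(anchor, positive, negatives, margin=1.0):
    """Hinge loss: the anchor-positive distance should be smaller than
    every anchor-negative distance by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_negs = np.sum((anchor[None, :] - negatives) ** 2, axis=1)
    return float(np.sum(np.maximum(0.0, d_pos - d_negs + margin)))

anchor = np.array([0.0, 0.0])       # target template embedding
positive = np.array([0.0, 1.0])     # patch covering the target
negatives = np.array([[5.0, 5.0],   # background patches
                      [0.0, 0.5]])
loss = relative_similarity_loss(anchor, positive, negatives)
```

Because only relative distances enter the loss, the learned comparison stays valid even when appearance change inflates all distances uniformly, which matches the robustness argument in the abstract.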


2016 ◽  
Vol 364-365 ◽  
pp. 33-50 ◽  
Author(s):  
Sihua Yi ◽  
Nan Jiang ◽  
Bin Feng ◽  
Xinggang Wang ◽  
Wenyu Liu

2018 ◽  
Vol 28 (10) ◽  
pp. 2826-2835 ◽  
Author(s):  
Qingshan Liu ◽  
Jiaqing Fan ◽  
Huihui Song ◽  
Wei Chen ◽  
Kaihua Zhang

2020 ◽  
Vol 2020 ◽  
pp. 1-19
Author(s):  
Chenpu Li ◽  
Qianjian Xing ◽  
Zhenguo Ma ◽  
Ke Zang

With the development of deep learning, trackers based on convolutional neural networks (CNNs) have made significant achievements in visual tracking over the years. The fully-convolutional Siamese network (SiamFC) is a typical representative of these trackers. SiamFC uses a two-branch CNN architecture and models visual tracking as a general similarity-learning problem. However, the feature maps it uses for tracking come only from the last layer of the CNN. These features contain high-level semantic information but lack sufficiently detailed texture information, so the SiamFC tracker tends to drift when other same-category objects are present or when the contrast between the target and the background is very low. To address this problem, we design a novel tracking algorithm that combines a correlation filter tracker and the SiamFC tracker in one framework. In this framework, the correlation filter tracker uses Histogram of Oriented Gradients (HOG) and Color Names (CN) features to guide the SiamFC tracker. The framework also contains an evaluation criterion that we design to assess the tracking results of the two trackers. If this criterion finds that the SiamFC tracker has failed, our framework uses the tracking result from the correlation filter tracker to correct SiamFC. In this way, the HOG and CN features remedy the shortcomings of SiamFC's high-level semantic features. Our algorithm thus provides a framework that combines two trackers and makes them complement each other in visual tracking. To the best of our knowledge, it is also the first to design an evaluation criterion that uses a correlation filter and zero padding to evaluate the tracking result. Comprehensive experiments were conducted on the Online Tracking Benchmark (OTB), Temple Color (TC128), the Benchmark for UAV Tracking (UAV-123), and the Visual Object Tracking (VOT) benchmark. The results show that our algorithm achieves quite competitive performance compared with the baseline tracker and several other state-of-the-art trackers.
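The abstract does not specify its evaluation criterion beyond "correlation filter and zero padding", so the sketch below shows a different but common failure test for response maps, the peak-to-sidelobe ratio (PSR), used here as a stand-in fallback rule. The threshold, window size, and names are all illustrative assumptions:

```python
import numpy as np

def psr(response, exclude=5):
    """Peak-to-sidelobe ratio of a response map: a high PSR means a
    confident, well-localised peak; a low PSR suggests tracker failure."""
    r = response.astype(float).copy()
    py, px = np.unravel_index(np.argmax(r), r.shape)
    peak = r[py, px]
    # Mask a window around the peak; the rest is the "sidelobe".
    r[max(0, py - exclude):py + exclude + 1,
      max(0, px - exclude):px + exclude + 1] = np.nan
    side = r[~np.isnan(r)]
    return (peak - side.mean()) / (side.std() + 1e-12)

def fuse(siam_box, cf_box, siam_response, threshold=6.0):
    """Keep SiamFC's box unless its response map looks unreliable,
    then fall back to the correlation filter tracker's box."""
    return siam_box if psr(siam_response) >= threshold else cf_box

# A sharp single peak keeps SiamFC; a flat (failed) map falls back.
sharp = np.zeros((41, 41)); sharp[20, 20] = 1.0
flat = np.ones((41, 41))
kept = fuse((10, 10, 50, 50), (12, 11, 50, 50), sharp)
fallback = fuse((10, 10, 50, 50), (12, 11, 50, 50), flat)
```

Whatever the concrete criterion, the framework's control flow is this switch: score the SiamFC response, and substitute the HOG/CN correlation filter's result whenever the score says SiamFC failed.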

