Deep learning assisted visual tracking of evader-UAV

Author(s):  
Athanasios Tsoukalas ◽  
Daitao Xing ◽  
Nikolaos Evangeliou ◽  
Nikolaos Giakoumidis ◽  
Anthony Tzes
2018 ◽  
Vol 60 ◽  
pp. 183-192 ◽  
Author(s):  
Xiaoyan Qian ◽  
Lei Han ◽  
Yuedong Wang ◽  
Meng Ding

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 118519-118529
Author(s):  
Jing Xin ◽  
Xing Du ◽  
Yaqian Shi

2018 ◽  
Vol 2018 ◽  
pp. 1-17 ◽  
Author(s):  
Rui Zhang ◽  
Zhaokui Wang ◽  
Yulin Zhang

Real-time astronaut visual tracking is the most important prerequisite for a flying assistant robot to follow and assist the served astronaut in a space station. In this paper, an astronaut visual tracking algorithm based on deep learning and a probabilistic model is proposed. An improved SSD (Single Shot MultiBox Detector) network, fine-tuned with its feature extraction layers initialized from a pretrained model, is proposed for robust astronaut detection in color images. By associating the detection results with the synchronized depth image measured by an RGB-D camera, a probabilistic model is presented to ensure accurate and continuous tracking of the specific served astronaut. The algorithm runs at 10 fps on a Jetson TX2 and was extensively validated on several datasets covering most types of astronaut activity. The experimental results indicate that the proposed algorithm achieves not only robust tracking of the specified person across diverse postures and clothing but also effective occlusion detection to avoid mistaken tracking.
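
The following is a minimal sketch of how color-image detections might be associated with a synchronized depth image to keep tracking one specific person. The box format, the Gaussian motion/depth scoring, and all parameter values are illustrative assumptions, not the authors' actual probabilistic model.

```python
import numpy as np

def box_depth(depth_map, box):
    """Median depth (metres) inside a detection box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = [int(v) for v in box]
    patch = depth_map[y1:y2, x1:x2]
    valid = patch[patch > 0]          # ignore missing depth readings
    return float(np.median(valid)) if valid.size else np.inf

def select_target(detections, depth_map, prev_center, prev_depth,
                  sigma_xy=80.0, sigma_z=0.5):
    """Pick the detection most consistent with the previous target state."""
    best, best_score = None, 0.0
    for det in detections:            # det = {"box": (x1, y1, x2, y2), "score": p}
        x1, y1, x2, y2 = det["box"]
        center = np.array([(x1 + x2) / 2, (y1 + y2) / 2])
        z = box_depth(depth_map, det["box"])
        # Gaussian likelihoods for image-plane motion and depth continuity
        p_xy = np.exp(-np.sum((center - prev_center) ** 2) / (2 * sigma_xy ** 2))
        p_z = np.exp(-((z - prev_depth) ** 2) / (2 * sigma_z ** 2))
        score = det["score"] * p_xy * p_z
        if score > best_score:
            best, best_score = det, score
    return best, best_score          # a low best_score can flag a likely occlusion

# Example usage with synthetic data: two detections, the left one matches
# the previous target position and depth more closely.
depth = np.full((480, 640), 2.0, dtype=np.float32)
dets = [{"box": (100, 80, 220, 400), "score": 0.9},
        {"box": (400, 90, 520, 410), "score": 0.8}]
target, conf = select_target(dets, depth,
                             prev_center=np.array([160.0, 240.0]),
                             prev_depth=2.1)
print(target, conf)
```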


Author(s):  
C. Xiao ◽  
A. Yilmaz ◽  
S. Lia

Despite having achieved good performance, visual tracking is still an open area of research, especially when the target undergoes severe appearance changes that are not covered by the model. In this paper, we therefore replace the appearance model with a concept model learned from large-scale datasets using a deep learning network. The concept model is a combination of high-level semantic information learned from myriads of objects with various appearances. In our tracking method, we generate the target's concept by combining the object concepts learned from the classification task. We also demonstrate that the last convolutional feature map can be used to generate a heat map highlighting the possible location of the given target in new frames. Finally, in the proposed tracking framework, we feed the target image, the search image cropped from the new frame, and their heat maps into a localization network to find the final target position. Compared to other state-of-the-art trackers, the proposed method shows comparable, and at times better, performance in real time.
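
Below is an illustrative sketch of turning a last-layer convolutional feature map into a target heat map, in the spirit of class-activation mapping. The feature map shape, the "concept weight" vector built by blending classifier weights, and the nearest-neighbour upsampling are assumptions for illustration, not the paper's exact network.

```python
import numpy as np

def concept_heatmap(feature_map, concept_weights):
    """feature_map: (C, H, W) activations; concept_weights: (C,) weights
    describing the target as a mixture of learned object concepts."""
    heat = np.tensordot(concept_weights, feature_map, axes=1)  # (H, W)
    heat = np.maximum(heat, 0.0)                               # keep positive evidence
    if heat.max() > 0:
        heat /= heat.max()                                     # normalize to [0, 1]
    return heat

def upsample_nearest(heat, out_h, out_w):
    """Cheap nearest-neighbour resize to overlay the heat map on the frame."""
    h, w = heat.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return heat[rows][:, cols]

# Example: a 512-channel 7x7 feature map and a target concept built by
# blending a few class weight vectors from a hypothetical classification head.
features = np.random.rand(512, 7, 7).astype(np.float32)
class_weights = np.random.rand(1000, 512).astype(np.float32)   # e.g. final fc layer
target_concept = class_weights[[207, 281]].mean(axis=0)        # blend two classes
heat = upsample_nearest(concept_heatmap(features, target_concept), 224, 224)
print(heat.shape, heat.max())
```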


2016 ◽  
Vol 175 ◽  
pp. 310-323 ◽  
Author(s):  
Guoxing Wu ◽  
Wenjie Lu ◽  
Guangwei Gao ◽  
Chunxia Zhao ◽  
Jiayin Liu
