Classifier Adaptive Fusion: Deep Learning for Robust Outdoor Vehicle Visual Tracking

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 118519-118529
Author(s):  
Jing Xin ◽  
Xing Du ◽  
Yaqian Shi

Author(s):  
Athanasios Tsoukalas ◽  
Daitao Xing ◽  
Nikolaos Evangeliou ◽  
Nikolaos Giakoumidis ◽  
Anthony Tzes

2018 ◽  
Vol 60 ◽  
pp. 183-192 ◽  
Author(s):  
Xiaoyan Qian ◽  
Lei Han ◽  
Yuedong Wang ◽  
Meng Ding

2018 ◽  
Vol 2018 ◽  
pp. 1-17 ◽  
Author(s):  
Rui Zhang ◽  
Zhaokui Wang ◽  
Yulin Zhang

Real-time astronaut visual tracking is the most important prerequisite for a flying assistant robot to follow and assist the served astronaut in a space station. In this paper, an astronaut visual tracking algorithm based on deep learning and a probabilistic model is proposed. An improved SSD (Single Shot MultiBox Detector) network, whose feature-extraction layers are initialized from a ready-made pre-trained model and then fine-tuned, is proposed for robust astronaut detection in the color image. By associating the detection results with the synchronized depth image measured by an RGB-D camera, a probabilistic model is presented to ensure accurate and continuous tracking of the specific served astronaut. The algorithm runs at 10 fps on a Jetson TX2 and was extensively validated on several datasets that cover most typical astronaut activities. The experimental results indicate that the proposed algorithm achieves not only robust tracking of the specified person with diverse postures or clothing but also effective occlusion detection for avoiding mistaken tracking.
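As a rough illustration of the depth-aided association step described in this abstract, the sketch below back-projects each colour-image detection to a 3-D point using the synchronized depth image and scores it against the predicted position of the served astronaut with a simple Gaussian gate. The function names, the camera intrinsics (fx, fy, cx, cy), and the gating threshold are illustrative assumptions, not the paper's actual detector or probabilistic model.

```python
# Minimal sketch (not the authors' code): associating colour-image detections
# with synchronized depth to keep tracking one specific astronaut.
# Detection boxes are assumed to come from an SSD-style detector; the Gaussian
# gate below only illustrates the idea of depth-aided data association.
import numpy as np

def box_depth(depth_image, box):
    """Median depth (metres) inside a detection box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = [int(v) for v in box]
    patch = depth_image[y1:y2, x1:x2]
    valid = patch[patch > 0]                 # drop missing depth readings
    return float(np.median(valid)) if valid.size else None

def association_likelihood(det_xyz, pred_xyz, sigma=0.3):
    """Gaussian likelihood of a detection given the predicted 3-D position."""
    d2 = np.sum((np.asarray(det_xyz) - np.asarray(pred_xyz)) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def track_step(detections, depth_image, pred_xyz, fx, fy, cx, cy, gate=0.1):
    """Pick the detection most consistent with the predicted 3-D position.

    Returns (best_xyz, best_score); a score below `gate` is treated as an
    occlusion / lost-target event rather than a confident match.
    """
    best_xyz, best_score = None, 0.0
    for box in detections:
        z = box_depth(depth_image, box)
        if z is None:
            continue
        u = 0.5 * (box[0] + box[2])          # box centre in pixels
        v = 0.5 * (box[1] + box[3])
        xyz = ((u - cx) * z / fx, (v - cy) * z / fy, z)   # back-project
        score = association_likelihood(xyz, pred_xyz)
        if score > best_score:
            best_xyz, best_score = xyz, score
    if best_score < gate:
        return None, best_score              # declare occlusion, keep prediction
    return best_xyz, best_score
```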


Author(s):  
C. Xiao ◽  
A. Yilmaz ◽  
S. Lia

Despite the good performance achieved so far, visual tracking is still an open area of research, especially when the target undergoes severe appearance changes that are not covered by the model. In this paper, we therefore replace the appearance model with a concept model that is learned from large-scale datasets using a deep-learning network. The concept model is a combination of high-level semantic information learned from myriad objects with various appearances. In our tracking method, we generate the target's concept by combining the object concepts learned from the classification task. We also demonstrate that the last convolutional feature map can be used to generate a heat map highlighting the likely location of the given target in new frames. Finally, in the proposed tracking framework, we feed the target image, the search image cropped from the new frame, and their heat maps into a localization network to find the final target position. Compared with other state-of-the-art trackers, the proposed method shows comparable and at times better performance in real time.
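The heat-map generation described here resembles class-activation mapping; the sketch below is a minimal CAM-style illustration that combines the last convolutional feature map with the classifier weights of the concepts (classes) responding most strongly to the target. The tensor names, shapes, and the top-k combination are assumptions and do not reproduce the authors' exact formulation.

```python
# Minimal CAM-style sketch (not the authors' exact method): build a heat map
# for a target by combining the last convolutional feature map of the search
# image with the classifier weights of the concepts that respond most strongly
# to the target image. All tensor names and shapes are assumptions.
import numpy as np

def concept_heatmap(feature_map, fc_weights, class_scores, top_k=5):
    """feature_map : (C, H, W) last conv activations of the search image
    fc_weights   : (num_classes, C) weights of the final classification layer
    class_scores : (num_classes,) classification scores of the target image
    Returns an (H, W) heat map highlighting likely target locations."""
    top = np.argsort(class_scores)[-top_k:]                  # strongest concepts
    weights = (class_scores[top, None] * fc_weights[top]).sum(axis=0)   # (C,)
    heat = np.tensordot(weights, feature_map, axes=([0], [0]))          # (H, W)
    heat -= heat.min()
    if heat.max() > 0:
        heat /= heat.max()                                   # normalise to [0, 1]
    return heat
```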


2016 ◽  
Vol 175 ◽  
pp. 310-323 ◽  
Author(s):  
Guoxing Wu ◽  
Wenjie Lu ◽  
Guangwei Gao ◽  
Chunxia Zhao ◽  
Jiayin Liu

2019 ◽  
Vol 8 (1) ◽  
pp. 28 ◽  
Author(s):  
Quanlong Feng ◽  
Dehai Zhu ◽  
Jianyu Yang ◽  
Baoguo Li

Accurate urban land-use mapping is a challenging task in the remote-sensing field. With the availability of diverse remote sensors, the combined use and integration of multisource data provides an opportunity for improving urban land-use classification accuracy. Deep neural networks have achieved very promising results in computer-vision tasks such as image classification and object detection. However, designing an effective deep-learning model for the fusion of multisource remote-sensing data remains an open problem. To tackle this issue, this paper proposes a modified two-branch convolutional neural network for the adaptive fusion of hyperspectral imagery (HSI) and Light Detection and Ranging (LiDAR) data. Specifically, the proposed model consists of an HSI branch and a LiDAR branch that share the same network structure to reduce the time cost of network design. A residual block is utilized in each branch to extract hierarchical, parallel, and multiscale features. An adaptive feature-fusion module, based on Squeeze-and-Excitation networks, is proposed to integrate the HSI and LiDAR features in a more reasonable and natural way. Experiments indicate that the proposed two-branch network performs well, with an overall accuracy of almost 92%. Compared with single-source data, the introduction of multisource data improves accuracy by at least 8%. The adaptive fusion model also increases classification accuracy by more than 3% compared with a feature-stacking method (simple concatenation). The results demonstrate that the proposed network can effectively extract and fuse features for better urban land-use mapping accuracy.
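Below is a minimal sketch of what an SE-style adaptive-fusion module could look like, assuming PyTorch and illustrative channel sizes: it re-weights the channels of the concatenated HSI and LiDAR branch features with a squeeze-and-excitation gate. This is not the paper's released code; the reduction ratio and layer choices are assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): an SE-style adaptive
# fusion module that re-weights the channels of the concatenated HSI and
# LiDAR branch features before classification.
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self, hsi_channels=128, lidar_channels=128, reduction=16):
        super().__init__()
        channels = hsi_channels + lidar_channels
        self.pool = nn.AdaptiveAvgPool2d(1)          # "squeeze": global context
        self.fc = nn.Sequential(                     # "excitation": channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, hsi_feat, lidar_feat):
        # hsi_feat, lidar_feat: (N, C_i, H, W) outputs of the two branches
        x = torch.cat([hsi_feat, lidar_feat], dim=1)
        n, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1)
        return x * w                                 # adaptively re-weighted fusion

# Usage: fused = AdaptiveFusion()(hsi_branch_out, lidar_branch_out)
```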

