Deep learning assisted robust visual tracking with adaptive particle filtering

2018 ◽  
Vol 60 ◽  
pp. 183-192 ◽  
Author(s):  
Xiaoyan Qian ◽  
Lei Han ◽  
Yuedong Wang ◽  
Meng Ding
Author(s):  
Athanasios Tsoukalas ◽  
Daitao Xing ◽  
Nikolaos Evangeliou ◽  
Nikolaos Giakoumidis ◽  
Anthony Tzes

2018 ◽  
pp. 1072-1090 ◽  
Author(s):  
Tony Tung ◽  
Takashi Matsuyama

Visual tracking of humans or objects in motion is a challenging problem when the observed data undergo appearance changes (e.g., due to illumination variations, occlusion, or cluttered backgrounds). Moreover, tracking systems are usually initialized with predefined target templates or trained beforehand on known datasets, so they are not always effective at detecting and tracking objects whose appearance changes over time. In this paper, we propose a multimodal framework based on particle filtering for visual tracking of objects under challenging conditions (e.g., tracking various human body parts from multiple views). In particular, we integrate cues such as color, motion, and depth in a global formulation. The Earth Mover's Distance is used to compare color models in a global fashion, and constraints on motion-flow features prevent the common drifting effects caused by error propagation. In addition, the model features an online mechanism that adaptively updates a subspace of multimodal templates to cope with appearance changes. Furthermore, the proposed model is integrated into a practical detection and tracking pipeline, and multiple instances can run in real time. Experimental results are reported on challenging real-world videos with poorly textured models and arbitrary nonlinear motions.
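For illustration, the following minimal Python sketch (not the authors' implementation) shows the color-cue part of such a particle-filter update: each candidate patch is scored against the target template with the 1-D Earth Mover's Distance, which for normalized histograms reduces to the L1 norm of the difference of their cumulative sums. The particle state layout (x, y, w, h), bin count, and kernel bandwidth are assumptions made for this example.

```python
import numpy as np

def extract_histogram(frame, state, bins=32):
    """Normalized intensity histogram of the patch at state = (x, y, w, h)."""
    x, y, w, h = (int(v) for v in state)   # assumed particle state layout
    patch = frame[y:y + h, x:x + w]
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    hist = hist.astype(float)
    return hist / max(hist.sum(), 1e-12)

def emd_1d(h1, h2):
    """1-D Earth Mover's Distance: L1 norm of the CDF difference."""
    return np.abs(np.cumsum(h1 - h2)).sum()

def update_weights(particles, frame, template_hist, bandwidth=0.1):
    """Re-weight particles by how well their patch matches the template.

    bandwidth is a hypothetical tuning parameter of the likelihood kernel.
    """
    weights = np.array([
        np.exp(-emd_1d(extract_histogram(frame, s), template_hist) / bandwidth)
        for s in particles
    ])
    return weights / weights.sum()  # normalize to a proper distribution
```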


2014 ◽  
Vol 4 (3) ◽  
pp. 69-84 ◽  
Author(s):  
Tony Tung ◽  
Takashi Matsuyama



IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 118519-118529
Author(s):  
Jing Xin ◽  
Xing Du ◽  
Yaqian Shi

2011 ◽  
Vol 2011 ◽  
pp. 1-11 ◽  
Author(s):  
P. L. M. Bouttefroy ◽  
A. Bouzerdoum ◽  
S. L. Phung ◽  
A. Beghdadi

2013 ◽  
Vol 457-458 ◽  
pp. 1028-1031
Author(s):  
Ying Hong Xie ◽  
Cheng Dong Wu

Since the imaging of objects in a camera is essentially a projective transformation, this paper proposes a novel visual tracking method that uses particle filtering on the SL(3) group to predict how the boundaries of the target region change at the next time step; this prediction serves as the dynamic model. Covariance matrices are used for the observation model. Extensive experiments show that the proposed method achieves stable and accurate tracking of objects undergoing significant geometric deformation, including nonrigid objects.
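A minimal sketch of such a dynamic model, under the standard parameterization of SL(3): each homography particle is perturbed by sampling coefficients on an 8-element basis of the Lie algebra sl(3) (traceless 3x3 matrices) and mapping back to the group with the matrix exponential, so every propagated particle remains a valid projective transformation. The noise scale sigma is a hypothetical tuning parameter.

```python
import numpy as np
from scipy.linalg import expm

def sl3_basis():
    """Return an 8-element basis of sl(3): traceless 3x3 generators."""
    basis = []
    for i in range(3):
        for j in range(3):
            if i != j:                       # six off-diagonal generators
                B = np.zeros((3, 3))
                B[i, j] = 1.0
                basis.append(B)
    basis.append(np.diag([1.0, -1.0, 0.0]))  # two traceless diagonal generators
    basis.append(np.diag([0.0, 1.0, -1.0]))
    return basis

BASIS = sl3_basis()

def propagate(particles, sigma=0.01, rng=None):
    """Random-walk dynamics on SL(3): H <- H @ expm(sum_k eps_k * B_k).

    sigma is an assumed noise scale for this illustration.
    """
    rng = rng or np.random.default_rng()
    out = []
    for H in particles:
        eps = rng.normal(0.0, sigma, size=len(BASIS))
        A = sum(e * B for e, B in zip(eps, BASIS))  # element of sl(3)
        out.append(H @ expm(A))   # tr(A) = 0, so det stays 1: H stays on SL(3)
    return out
```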


2018 ◽  
Vol 2018 ◽  
pp. 1-17 ◽  
Author(s):  
Rui Zhang ◽  
Zhaokui Wang ◽  
Yulin Zhang

Real-time astronaut visual tracking is the most important prerequisite for a flying assistant robot to follow and assist the served astronaut in a space station. In this paper, an astronaut visual tracking algorithm based on deep learning and a probabilistic model is proposed. An improved SSD (Single Shot MultiBox Detector) network, whose feature-extraction layers are initialized from a ready-made pretrained model and then fine-tuned, is proposed for robust astronaut detection in color images. By associating the detection results with the synchronized depth image measured by an RGB-D camera, a probabilistic model is presented to ensure accurate and consecutive tracking of the specific served astronaut. The algorithm runs at 10 fps on a Jetson TX2 and was extensively validated on several datasets covering most types of astronaut activities. The experimental results indicate that the proposed algorithm achieves not only robust tracking of the specified person across diverse postures and clothing but also effective occlusion detection that avoids mistaken tracking.
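As an illustration of the detection-to-depth association step, here is a hedged Python sketch (not the paper's exact probabilistic model): each detection is scored by its detector confidence times Gaussian likelihoods of spatial and depth consistency with the previously tracked astronaut, and the frame is treated as an occlusion when no candidate passes the gate. The detection tuple layout, the SIGMA_* constants, and the gating threshold are assumptions.

```python
import numpy as np

SIGMA_POS = 40.0    # assumed spatial-consistency scale (pixels)
SIGMA_DEPTH = 0.3   # assumed depth-consistency scale (meters)
GATE = 1e-3         # assumed minimum joint score before declaring occlusion

def median_depth(depth_img, box):
    """Median depth inside a detection box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = (int(v) for v in box)
    return float(np.median(depth_img[y1:y2, x1:x2]))

def associate(detections, depth_img, prev_center, prev_depth):
    """Pick the detection most consistent with the previous track.

    detections: list of (box, confidence) pairs from the detector (assumed).
    Returns (box, depth) of the best match, or None on occlusion.
    """
    best, best_score = None, 0.0
    for box, conf in detections:
        cx, cy = 0.5 * (box[0] + box[2]), 0.5 * (box[1] + box[3])
        d = median_depth(depth_img, box)
        pos_lik = np.exp(-((cx - prev_center[0]) ** 2 +
                           (cy - prev_center[1]) ** 2) / (2 * SIGMA_POS ** 2))
        depth_lik = np.exp(-(d - prev_depth) ** 2 / (2 * SIGMA_DEPTH ** 2))
        score = conf * pos_lik * depth_lik
        if score > best_score:
            best, best_score = (box, d), score
    return best if best_score > GATE else None
```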

