Robust Visual Tracking Using Particle Filtering on SL(3) Group

2013 ◽  
Vol 457-458 ◽  
pp. 1028-1031
Author(s):  
Ying Hong Xie ◽  
Cheng Dong Wu

Since the imaging of an object in a camera is essentially a projective transformation, this paper proposes a novel visual tracking method that uses particle filtering on the SL(3) group as the dynamic model, predicting how the boundaries of the target region change at the next time step. Covariance matrices serve as the observation model. Extensive experiments show that the proposed method achieves stable and accurate tracking of objects undergoing significant geometric deformation, including nonrigid objects.
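The dynamic model above propagates particles on the SL(3) group (3×3 real matrices of determinant 1, representing plane projective transformations). A minimal sketch of such a propagation step, assuming a random walk in the Lie algebra sl(3) mapped through the matrix exponential (the basis construction, noise scale, and function names here are illustrative, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sl3_basis():
    # 8 generators of sl(3): traceless 3x3 matrices
    # (6 off-diagonal single-entry matrices + 2 traceless diagonal ones)
    E = []
    for i in range(3):
        for j in range(3):
            if i != j:
                B = np.zeros((3, 3))
                B[i, j] = 1.0
                E.append(B)
    E.append(np.diag([1.0, -1.0, 0.0]))
    E.append(np.diag([0.0, 1.0, -1.0]))
    return E

BASIS = sl3_basis()

def expm(A, terms=20):
    # truncated Taylor series of the matrix exponential,
    # adequate for the small perturbations sampled below
    X, term = np.eye(3), np.eye(3)
    for k in range(1, terms):
        term = term @ A / k
        X = X + term
    return X

def propagate(H, sigma=0.01):
    # random walk on SL(3): right-multiply the particle H by
    # exp(A), where A is a random element of the Lie algebra sl(3)
    xi = rng.normal(0.0, sigma, size=8)
    A = sum(c * B for c, B in zip(xi, BASIS))
    H_new = H @ expm(A)
    # renormalize so the determinant stays exactly 1
    return H_new / np.cbrt(np.linalg.det(H_new))
```

Because every generator is traceless, exp(A) has determinant exp(tr A) = 1, so the walk stays on SL(3) up to numerical error; the final cube-root normalization removes the residual drift.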

2018 ◽  
Vol 14 (3) ◽  
pp. 155014771876685
Author(s):  
Yinghong Xie ◽  
Xiaosheng Yu ◽  
Chengdong Wu

Visual object tracking over wireless multimedia sensor networks is an active research topic, but existing linear methods for processing feature vectors often drift when tracking objects that undergo significant nonplanar pose variations. In this article, we propose a novel nonlinear algorithm for tracking strongly deformable objects. The proposed tracking scheme uses two filters. The first is designed on the Grassmann manifold, an entropy manifold within the family of Lie group manifolds that can describe and process appearance features more accurately; it estimates the object's appearance by exploiting the mapping between points on the manifold and their counterparts on the tangent space. The second, noting that the imaging of an object is essentially a projective transformation, is designed on the projective transformation group SL(3) and describes the geometric deformation of the object. The two filters execute alternately to mitigate tracking drift. Extensive experiments show that the proposed method achieves stable and accurate tracking of targets with significant geometric deformation, even under occlusion and illumination changes.
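A common way to score appearance hypotheses in trackers of this family is the region covariance descriptor: a small SPD matrix of per-pixel features compared with an affine-invariant Riemannian distance. A minimal sketch under that assumption (the feature set, regularization, and function names are illustrative; the paper's exact observation model may differ):

```python
import numpy as np

def region_covariance(patch):
    # descriptor of a grayscale patch: covariance of the per-pixel
    # feature vector (x, y, intensity, |Ix|, |Iy|)
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    Ix = np.gradient(patch, axis=1)
    Iy = np.gradient(patch, axis=0)
    F = np.stack([xs.ravel().astype(float), ys.ravel().astype(float),
                  patch.ravel(), np.abs(Ix).ravel(), np.abs(Iy).ravel()],
                 axis=0)
    # small ridge keeps the matrix positive definite
    return np.cov(F) + 1e-6 * np.eye(5)

def affine_invariant_dist(C1, C2):
    # Riemannian distance between SPD matrices:
    # sqrt(sum of squared log-eigenvalues of C1^{-1} C2)
    lam = np.linalg.eigvals(np.linalg.solve(C1, C2)).real
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))
```

A candidate region is then weighted by how small its descriptor's distance to the template descriptor is, which is exactly the kind of nonlinear (manifold-valued) comparison the linear methods criticized above cannot express.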


2018 ◽  
pp. 1072-1090 ◽  
Author(s):  
Tony Tung ◽  
Takashi Matsuyama

Visual tracking of humans or objects in motion is a challenging problem when the observed data undergo appearance changes (e.g., due to illumination variations, occlusion, or cluttered backgrounds). Moreover, tracking systems are usually initialized with predefined target templates or trained beforehand on known datasets, so they are not always effective at detecting and tracking objects whose appearance changes over time. In this paper, we propose a multimodal framework based on particle filtering for visual tracking of objects under challenging conditions (e.g., tracking various human body parts from multiple views). In particular, we integrate color, motion, and depth cues in a global formulation. The Earth Mover's distance is used to compare color models globally, and constraints on motion flow features prevent the common drifting effects caused by error propagation. In addition, the model features an online mechanism that adaptively updates a subspace of multimodal templates to cope with appearance changes. Furthermore, the proposed model is integrated into a practical detection-and-tracking process, and multiple instances can run in real time. Experimental results are obtained on challenging real-world videos with poorly textured models and arbitrary nonlinear motions.
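The abstract uses the Earth Mover's distance to compare color models inside a particle-filter likelihood. A minimal 1-D sketch of that idea (the paper compares full color signatures; restricting to 1-D histograms, where EMD reduces to the L1 distance between cumulative distributions, and the names `emd_1d` and `particle_weight` are mine):

```python
import numpy as np

def emd_1d(h1, h2):
    # Earth Mover's distance between two 1-D histograms;
    # for 1-D distributions it equals the L1 distance between
    # their normalized cumulative distribution functions
    c1 = np.cumsum(h1 / np.sum(h1))
    c2 = np.cumsum(h2 / np.sum(h2))
    return float(np.sum(np.abs(c1 - c2)))

def particle_weight(h_candidate, h_template, lam=10.0):
    # observation likelihood for a particle filter:
    # a smaller EMD to the template yields a larger weight
    return float(np.exp(-lam * emd_1d(h_candidate, h_template)))
```

Unlike bin-to-bin measures such as the Bhattacharyya coefficient, EMD accounts for cross-bin mass transport, which makes the color likelihood tolerant to small illumination shifts, matching the robustness claim above.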


2014 ◽  
Vol 4 (3) ◽  
pp. 69-84 ◽  
Author(s):  
Tony Tung ◽  
Takashi Matsuyama

(Abstract identical to the 2018 reprint above.)


2018 ◽  
Vol 60 ◽  
pp. 183-192 ◽  
Author(s):  
Xiaoyan Qian ◽  
Lei Han ◽  
Yuedong Wang ◽  
Meng Ding

2016 ◽  
Vol 177 ◽  
pp. 612-619 ◽  
Author(s):  
Ming-Liang Gao ◽  
Jin Shen ◽  
Li-Ju Yin ◽  
Wei Liu ◽  
Guo-Feng Zou ◽  
...  
