Visual Tracking Using Multimodal Particle Filter

2018 ◽ pp. 1072-1090 ◽ Author(s): Tony Tung, Takashi Matsuyama

Visual tracking of humans or objects in motion is a challenging problem when the observed data undergo appearance changes (e.g., due to illumination variations, occlusion, or cluttered backgrounds). Moreover, tracking systems are usually initialized with predefined target templates or trained beforehand on known datasets. Hence, they are not always effective at detecting and tracking objects whose appearance changes over time. In this paper, we propose a multimodal framework based on particle filtering for visual tracking of objects under challenging conditions (e.g., tracking various human body parts from multiple views). In particular, we integrate cues such as color, motion, and depth in a global formulation. The Earth Mover's Distance is used to compare color models in a global fashion, and constraints on motion-flow features prevent the common drifting effects caused by error propagation. In addition, the model features an online mechanism that adaptively updates a subspace of multimodal templates to cope with appearance changes. Furthermore, the proposed model is integrated in a practical detection-and-tracking process, and multiple instances can run in real time. Experimental results are obtained on challenging real-world videos with poorly textured models and arbitrary non-linear motions.
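To make the color term of such a particle filter concrete, here is a minimal sketch, assuming grayscale frames and 1-D intensity histograms as the color model; for normalized 1-D histograms the Earth Mover's Distance reduces to the L1 distance between their cumulative sums. The window size, motion-noise scale, and likelihood temperature below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def histogram(patch, bins=32):
    """Normalized intensity histogram of an image patch."""
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def emd_1d(p, q):
    """EMD between two normalized 1-D histograms: L1 distance of their CDFs."""
    return np.abs(np.cumsum(p - q)).sum()

def track_step(frame, particles, weights, template_hist,
               sigma=5.0, lam=20.0, win=16):
    """One predict-weight-resample cycle over (x, y) particle states."""
    # Predict: random-walk motion model (assumed for simplicity).
    particles = particles + rng.normal(0.0, sigma, particles.shape)
    # Weight: likelihood decays with the EMD between patch and template colors.
    for i, (x, y) in enumerate(particles.astype(int)):
        x = int(np.clip(x, 0, frame.shape[1] - win))
        y = int(np.clip(y, 0, frame.shape[0] - win))
        patch = frame[y:y + win, x:x + win]
        weights[i] = np.exp(-lam * emd_1d(histogram(patch), template_hist))
    weights = weights / weights.sum()
    # Resample: multinomial resampling proportional to weight.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

The paper's full likelihood also fuses motion and depth cues and adaptively updates the template subspace online; this sketch isolates the color term only.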

2014 ◽ Vol 4 (3) ◽ pp. 69-84 ◽ Author(s): Tony Tung, Takashi Matsuyama



Author(s): Tony Tung, Takashi Matsuyama

This chapter presents a new formulation for the problem of human motion tracking in video. Tracking remains a challenging problem when strong appearance changes occur, as in videos of humans in motion. Most trackers rely on a predefined template or on a training dataset to achieve detection and tracking; therefore, they are not efficient at tracking objects whose appearance is not known in advance. A solution is to use an online method that iteratively updates a subspace of reference target models. In addition, we propose to integrate color and motion cues in a particle filter framework to track human body parts. The algorithm switches between two modes, detection and tracking. The detection steps involve trained classifiers that update the estimated positions of the tracking windows, whereas the tracking steps rely on an adaptive color-based particle filter coupled with optical flow estimations. The Earth Mover's Distance is used to compare color models in a global fashion, and constraints on flow features avoid drifting effects. The proposed method has proven effective at tracking body parts in motion and can cope with full appearance changes. Experiments were performed on challenging real-world videos with poorly textured models and non-linear motions.
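As a rough illustration of the flow-feature constraint mentioned above, the sketch below (assuming OpenCV; the displacement bound and feature-detector parameters are made-up values, not the chapter's) rejects a tracking-window update whenever the median sparse optical-flow displacement inside the window is implausibly large, which is one simple way to curb drift from error propagation.

```python
import cv2
import numpy as np

MAX_FLOW = 30.0  # assumed per-frame displacement bound, in pixels

def flow_is_consistent(prev_gray, gray, box):
    """Check a tracking window update against sparse Lucas-Kanade flow."""
    x, y, w, h = box
    # Detect corners inside the previous window, then shift to frame coords.
    pts = cv2.goodFeaturesToTrack(prev_gray[y:y + h, x:x + w], 50, 0.01, 5)
    if pts is None:
        return False
    pts = (pts + np.float32([x, y])).astype(np.float32)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    ok = status.ravel() == 1
    if not ok.any():
        return False
    disp = np.linalg.norm((nxt - pts).reshape(-1, 2)[ok], axis=1)
    # Reject the update if the tracked features moved implausibly far.
    return float(np.median(disp)) < MAX_FLOW
```

In a two-mode loop like the chapter's, a failed check would switch the system back to detection so the trained classifiers can re-initialize the window.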


2018 ◽ Vol 60 ◽ pp. 183-192 ◽ Author(s): Xiaoyan Qian, Lei Han, Yuedong Wang, Meng Ding

2019 ◽ Vol 16 (8) ◽ pp. 3571-3575 ◽ Author(s): B. Ankayarkanni, J. Albert Mayan, J. Aruna

Object tracking is the process of locating a moving object in a video sequence; at its core, tracking is a matching problem. Numerous tracking methods exist with reported success. Nevertheless, the challenging issue in visual tracking is handling appearance changes of the target object arising from its deformable nature. Tracking failures can occur due to non-rigid deformation, fast motion, wide variations in pose and scale, occlusion, drift, and so on. One of the main reasons behind such failures is the ineffective image representation scheme used by many algorithms. In this paper, we present a visual tracking method using a Support Vector Machine (SVM) together with various kernel computations. An Expectation-Maximization estimation, based on the estimate and score values, enables classification with the SVM. In object tracking, the object to be followed is selected by the user; that object's features are then extracted, labels are assigned according to foreground and background, and the labels and features are passed to the SVM for training. We also incorporate an update scheme to account for appearance changes. Our tracker can handle occlusion and performs favorably against existing visual tracking algorithms under various conditions. Qualitative and quantitative evaluations on various challenging sequences demonstrate the effective performance of our tracking algorithm. A pipeline of object tracking with high-level recognition is proposed for single-object tracking, in which the target category is effectively recognized during tracking.
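As a hedged sketch of tracking-by-classification in this spirit (assuming scikit-learn; raw pixel intensities stand in for the paper's features, an RBF kernel for its kernel computations, and the window size and search radius are invented parameters), one can train an SVM on the user-selected foreground window against surrounding background windows, then score candidate windows in the next frame:

```python
import numpy as np
from sklearn.svm import SVC

WIN = 24  # assumed square window size, in pixels

def patch_features(frame, x, y):
    """Flattened, normalized intensity patch as a feature vector."""
    p = frame[y:y + WIN, x:x + WIN].astype(np.float64)
    return (p / 255.0).ravel()

def train_tracker(frame, fg_xy, bg_xys):
    """Fit an SVM on the user-selected foreground window (+1)
    and surrounding background windows (-1)."""
    X = [patch_features(frame, *fg_xy)] + [patch_features(frame, *p) for p in bg_xys]
    y = [1] + [-1] * len(bg_xys)
    clf = SVC(kernel="rbf", gamma="scale")
    return clf.fit(np.stack(X), np.array(y))

def track(clf, frame, prev_xy, radius=12, step=4):
    """Score candidate windows around the previous position and
    return the one the SVM ranks most object-like."""
    px, py = prev_xy
    best, best_score = prev_xy, -np.inf
    for dx in range(-radius, radius + 1, step):
        for dy in range(-radius, radius + 1, step):
            x, y = px + dx, py + dy
            if x < 0 or y < 0 or y + WIN > frame.shape[0] or x + WIN > frame.shape[1]:
                continue
            score = clf.decision_function([patch_features(frame, x, y)])[0]
            if score > best_score:
                best, best_score = (x, y), score
    return best
```

The abstract's EM-based scoring and its exact update scheme are not reproduced here; a simple stand-in would periodically retrain the SVM with the newly tracked window as an additional positive sample.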


2011 ◽ Vol 2011 ◽ pp. 1-11 ◽ Author(s): P. L. M. Bouttefroy, A. Bouzerdoum, S. L. Phung, A. Beghdadi
