Adaptive MCMC-Based Particle Filter for Real-Time Multi-Face Tracking on Mobile Platforms

2014 ◽  
Vol 10 (3) ◽  
pp. 17-25
Author(s):  
In Seop Na ◽  
Ha Le ◽  
Soo Hyung Kim



Author(s):  
Juan José Pantrigo ◽  
Antonio S. Montemayor ◽  
Raúl Cabido


2013 ◽  
Vol 06 (05) ◽  
pp. 1-5 ◽  
Author(s):  
Wei-Ming Chen ◽  
Yi-Lung Lin ◽  
Ya-Hsiung Hsieh


2014 ◽  
Vol 651-653 ◽  
pp. 2306-2309
Author(s):  
Dong He Yang

The traditional particle filter algorithm cannot guarantee effective tracking when the target rotates or is occluded. This study proposes a tracking method that combines an α-β-γ filter with a particle filter. The position predicted by the α-β-γ filter serves as the center of the target candidate model that the particle filter computes in the next frame, which reduces the number of particle filter iterations and strengthens real-time tracking of a moving face. When the face is detected to be occluded, the α-β-γ filter's predicted point is used as the facial position, preserving the continuity of the motion. The experimental results show that the proposed algorithm improves on the traditional particle filter for real-time face tracking and enhances its robustness to occlusion.
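As a rough illustration of the prediction step described in the abstract, the sketch below implements a 2D α-β-γ (fixed-gain) filter whose predicted position could seed the particle filter's candidate-model search center for the next frame, and which coasts on its own prediction during occlusion. The gains, state layout, and the hypothetical `run_particle_filter` call are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

class AlphaBetaGammaFilter:
    """Minimal fixed-gain (alpha-beta-gamma) tracker for a 2D face center."""

    def __init__(self, x0, alpha=0.85, beta=0.005, gamma=0.0001, dt=1.0):
        self.x = np.asarray(x0, dtype=float)  # position estimate (pixels)
        self.v = np.zeros_like(self.x)        # velocity estimate (pixels/frame)
        self.a = np.zeros_like(self.x)        # acceleration estimate
        self.alpha, self.beta, self.gamma, self.dt = alpha, beta, gamma, dt

    def predict(self):
        # Constant-acceleration prediction for the next frame.
        x_pred = self.x + self.v * self.dt + 0.5 * self.a * self.dt ** 2
        v_pred = self.v + self.a * self.dt
        return x_pred, v_pred

    def update(self, z):
        # z: measured face center (e.g. from the particle filter), or None if occluded.
        x_pred, v_pred = self.predict()
        if z is None:
            # Occlusion: keep the prediction so the trajectory stays continuous.
            self.x, self.v = x_pred, v_pred
            return self.x
        r = np.asarray(z, dtype=float) - x_pred   # innovation (residual)
        self.x = x_pred + self.alpha * r
        self.v = v_pred + self.beta * r / self.dt
        self.a = self.a + 2.0 * self.gamma * r / self.dt ** 2
        return self.x

# Hypothetical usage: the prediction seeds the particle filter's search center,
# so fewer particles/iterations are needed for real-time tracking.
abg = AlphaBetaGammaFilter(x0=[160.0, 120.0])    # illustrative initial face center
center_guess, _ = abg.predict()                  # candidate-model center for next frame
# measured = run_particle_filter(frame, center=center_guess)  # hypothetical call
abg.update(None)                                 # occluded frame: coast on the prediction
```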





Author(s):  
HyeonJung Park ◽  
Youngki Lee ◽  
JeongGil Ko

In this work we present SUGO, a depth video-based system for translating sign language to text using a smartphone's front camera. While exploiting depth-only video offers benefits such as being less privacy-invasive than RGB video, it introduces new challenges, including low video resolution and the sensor's sensitivity to user motion. We overcome these challenges by diversifying our sign language video dataset via data augmentation so that it is robust to various usage scenarios, and by designing a set of schemes that emphasize human gestures in the input images for effective sign detection. The inference engine of SUGO is based on a 3-dimensional convolutional neural network (3DCNN) that classifies a sequence of video frames as a pre-trained word. Furthermore, the overall operations are designed to be lightweight so that sign language translation takes place in real time using only the resources available on a smartphone, with no help from cloud servers or external sensing components. Specifically, to train and test SUGO, we collect sign language data from 20 individuals for 50 Korean Sign Language words, summing to a dataset of ~5,000 sign gestures, and collect additional in-the-wild data to evaluate the performance of SUGO in real-world usage scenarios with different lighting conditions and daily activities. Our extensive evaluations show that SUGO can properly classify sign words with an accuracy of up to 91% and suggest that the system is suitable (in terms of resource usage, latency, and environmental robustness) to enable a fully mobile solution for sign language translation.
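To make the classification stage concrete, here is a minimal sketch of a small 3DCNN that maps a short depth-video clip to one of 50 sign-word classes, kept compact in the spirit of on-device inference. The layer sizes, clip resolution, and frame count are assumptions for illustration; the paper's actual architecture is not specified here.

```python
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    """Small 3D CNN classifying a depth clip into one of num_classes sign words."""

    def __init__(self, num_classes=50):
        super().__init__()
        self.features = nn.Sequential(
            # Input shape: (batch, 1 depth channel, frames, height, width)
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # global pooling keeps the classifier head tiny
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, clip):
        x = self.features(clip)
        return self.classifier(x.flatten(1))

# Example: a 16-frame, 112x112 depth clip (resolution and length are illustrative).
model = Tiny3DCNN(num_classes=50)
clip = torch.randn(1, 1, 16, 112, 112)
logits = model(clip)                   # (1, 50) class scores
predicted_word = logits.argmax(dim=1)  # index of the predicted sign word
```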


