An Efficient Descriptor for Gait Recognition Using Spatio-Temporal Cues

Author(s):  
Sanjay Kumar Gupta ◽  
Gaurav Mahesh Sultaniya ◽  
Pratik Chattopadhyay
2021 ◽  
pp. 108453
Author(s):  
Huakang Li ◽  
Yidan Qiu ◽  
Huimin Zhao ◽  
Jin Zhan ◽  
Rongjun Chen ◽  
...  

2018 ◽  
Vol 10 (1) ◽  
pp. 29 ◽  
Author(s):  
Mohammad H. Ghaeminia ◽  
Shahriar B. Shokouhi

Author(s):  
Md. Zasim Uddin ◽  
Daigo Muramatsu ◽  
Noriko Takemura ◽  
Md. Atiqur Rahman Ahad ◽  
Yasushi Yagi

Abstract
Gait-based features offer the potential to recognize a subject even from a low-resolution image sequence, and they can be captured at a distance without the subject's cooperation. Person recognition using gait-based features (gait recognition) is therefore a promising real-life application. However, several body parts of a subject are often occluded by beams, pillars, cars, trees, or another walking person, so approaches that require an unoccluded gait image sequence are not applicable in such cases. Occlusion handling is thus a challenging but important issue for gait recognition. In this paper, we propose silhouette sequence reconstruction from an occluded sequence (sVideo) based on a conditional deep generative adversarial network (GAN). From the reconstructed sequence, we estimate the gait cycle and extract gait features from a single gait cycle of the image sequence. To regularize the training of the proposed generative network, we use an adversarial loss based on a triplet hinge loss incorporating the Wasserstein GAN (WGAN-hinge). To the best of our knowledge, WGAN-hinge is the first adversarial loss that supervises the generator network during training by incorporating pairwise similarity ranking information. The proposed approach was evaluated on multiple challenging occlusion patterns. The experimental results demonstrate that it outperforms the existing state-of-the-art benchmarks.
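The WGAN-hinge loss is described above only at a high level. As a rough illustration of the general idea, the PyTorch sketch below shows how a hinge-based ranking term can push a critic to score real (unoccluded) silhouette sequences above reconstructed ones by at least a margin, while the generator is trained to raise the critic's score of its reconstructions. The toy critic, tensor shapes, and margin value are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy critic for illustration: flattens a silhouette sequence
# (batch, frames, height, width) and emits a single scalar score per sequence.
critic = nn.Sequential(nn.Flatten(), nn.Linear(10 * 64 * 64, 1))

def critic_loss(critic, real_seq, fake_seq, margin=1.0):
    """Hinge-style ranking loss for the critic: penalize it whenever its
    score for the real sequence does not exceed its score for the
    reconstructed sequence by at least `margin`."""
    score_real = critic(real_seq)
    score_fake = critic(fake_seq.detach())  # do not backprop into the generator
    return F.relu(margin - (score_real - score_fake)).mean()

def generator_loss(critic, fake_seq):
    """The generator raises the critic's score of its reconstructions,
    as in the usual Wasserstein generator objective."""
    return -critic(fake_seq).mean()

# Smoke test with random tensors standing in for silhouette sequences.
real_seq = torch.rand(4, 10, 64, 64)  # batch of real one-gait-cycle sequences
fake_seq = torch.rand(4, 10, 64, 64)  # stand-in for reconstructed sequences
print(critic_loss(critic, real_seq, fake_seq).item())
print(generator_loss(critic, fake_seq).item())
```

In practice the two losses would be minimized in alternating steps, one for the critic and one for the generator, as is standard for Wasserstein-style adversarial training.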


2019 ◽  
Vol 79 (1-2) ◽  
pp. 713-736 ◽  
Author(s):  
Mohammad H. Ghaeminia ◽  
Shahriar B. Shokouhi ◽  
Ali Badiezadeh

2012 ◽  
Vol 25 (0) ◽  
pp. 43 ◽  
Author(s):  
Brenda Malcolm ◽  
Karen Reilly ◽  
Jérémie Mattout ◽  
Roméo Salemme ◽  
Olivier Bertrand ◽  
...  

Our ability to accurately discriminate information from one sensory modality is often influenced by information from the other senses. Previous research indicates that tactile perception on the hand may be enhanced if participants look at a hand (compared to a neutral object) and if visual information about the origin of touch conveys temporal and/or spatial congruency. The current experiment further assessed the effects of non-informative vision on tactile perception. Participants made speeded discrimination responses (digit 2 or digit 5 of their right hand) to supra-threshold electro-cutaneous stimulation while viewing a video showing a pointer, either static or moving (dynamic) towards the same or a different digit of a hand, or towards the corresponding spatial location on a non-corporeal object (an engine). Thus, besides manipulating whether the seen contact was spatially congruent with the simultaneously felt touch, we also manipulated the nature of the recipient object (hand vs. engine). Behaviourally, the temporal cues provided by dynamic visual information about an upcoming touch decreased reaction times. Additionally, tactile discrimination was enhanced more when participants viewed a spatially congruent contact than a spatially incongruent one. Most importantly, this visually driven improvement was greater in the view-hand condition than in the view-object condition. Spatially congruent, hand-specific visual events also produced the largest amplitude of the P50 somatosensory evoked potential (SEP). We conclude that tactile perception is enhanced when vision provides non-predictive spatio-temporal cues, and that these effects are specifically enhanced when viewing a hand.


2013 ◽  
Author(s):  
Azhin Sabir ◽  
Naseer Al-jawad ◽  
Sabah Jassim
