Beyond view transformation: feature distribution consistent GANs for cross-view gait recognition

Author(s):
Yu Wang ◽
Yi Xia ◽
Yongliang Zhang

Author(s):
Yasushi Makihara ◽
Ryusuke Sagawa ◽
Yasuhiro Mukaigawa ◽
Tomio Echigo ◽
Yasushi Yagi

Author(s):
Maodi Hu ◽
Yunhong Wang ◽
Zhaoxiang Zhang

Since it is difficult to guarantee that at least one complete, continuous gait cycle is captured in real applications, we address the multi-view gait recognition problem with short probe sequences. Using unified multi-view population hidden Markov models (umvpHMMs), the gait pattern is represented as a fixed-length sequence of multi-view stances. By incorporating multi-stance dynamics, the well-known view transformation model (VTM) is extended into a multilinear projection model in a fourth-order tensor space, from which a view-independent, stance-independent identity vector (VSIV) can be extracted. The main advantage is that the VSIV is stable for each subject regardless of camera location or sequence length. Experiments show that our algorithm achieves encouraging cross-view gait recognition performance even with short probe sequences.
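The multilinear-projection idea behind the VSIV can be illustrated with a toy NumPy sketch. This is not the paper's implementation: the tensor shapes, the HOSVD factorization, and the least-squares recovery step are all illustrative assumptions. A fourth-order data tensor is indexed by view × stance × subject × feature; decomposing it with a higher-order SVD yields per-mode factor matrices, and the row of the subject-mode factor plays the role of an identity vector that does not depend on the view or stance used to observe it.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Toy data tensor: views x stances x subjects x feature-dim (synthetic).
rng = np.random.default_rng(0)
n_v, n_s, n_id, n_f = 4, 5, 6, 8
D = rng.standard_normal((n_v, n_s, n_id, n_f))

# HOSVD: left singular vectors of each unfolding give the mode factors.
U = [np.linalg.svd(unfold(D, m), full_matrices=False)[0] for m in range(4)]

# Core tensor Z = D x1 U0^T x2 U1^T x3 U2^T x4 U3^T.
Z = D
for m in range(4):
    Z = np.moveaxis(np.tensordot(U[m].T, np.moveaxis(Z, m, 0), axes=1), 0, m)

# For a probe captured at a known view v and stance s, the model predicts
# D[v, s] = U2 @ B_feat, where B_feat is the core contracted with the
# view and stance factor rows, mapped back to feature space. The subject's
# identity coefficients then follow from a least-squares solve.
v, s, subj = 1, 2, 3
B = np.tensordot(np.tensordot(U[0][v], Z, axes=(0, 0)), U[1][s], axes=(0, 0))
B_feat = B @ U[3].T              # (n_id, n_f): identity basis in feature space
probe = D[v, s, subj]            # observed feature vector for one subject
ident = np.linalg.lstsq(B_feat.T, probe, rcond=None)[0]

# `ident` matches the subject's row of the identity-mode factor, whatever
# (v, s) we observed from -- the analogue of a view/stance-independent VSIV.
```

Because the toy tensor is decomposed at full rank, the recovered coefficients coincide exactly with the subject-mode factor row; in the paper's setting, the projection would instead be fit from training data and matched across views by comparing these identity vectors.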

