Gait Feature Fusion using Factorial HMM

Author(s):  
Jimin Liang ◽  
Changhong Chen ◽  
Heng Zhao ◽  
Haihong Hu ◽  
Jie Tian

Multisource information fusion offers a promising route to building a superior classification system. For the gait recognition problem, information fusion needs to be employed in at least three circumstances: 1) fusion of multiple gait features, 2) fusion of gait sequences from multiple views, and 3) fusion of gait with other biometrics. Feature concatenation is the most popular methodology for integrating multiple features. However, because gait data are high dimensional and the number of available training samples is small, feature concatenation typically leads to the well-known curse of dimensionality and small sample size problems. In this chapter, we explore the factorial hidden Markov model (FHMM), an extension of the hidden Markov model (HMM) with a multiple-layer structure, as a feature fusion framework for gait recognition. FHMM offers a way to combine several gait features without concatenating them into a single augmented feature vector and thus, to some extent, avoids the curse of dimensionality and small sample size problems for gait recognition. Three gait features, the frieze feature, the wavelet feature, and the boundary signature, are adopted in numerical experiments conducted on the CMU MoBo database and CASIA gait database A. Besides cumulative matching score (CMS) curves, McNemar's test is employed to check the statistical significance of the performance differences between the recognition algorithms. Experimental results demonstrate that the proposed FHMM feature fusion scheme outperforms the feature concatenation method.
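For the evaluation step, McNemar's test compares two recognition algorithms on the same probe set by looking only at the probes on which they disagree. Below is a minimal sketch, not the chapter's code, assuming hypothetical boolean arrays `correct_a` and `correct_b` that mark whether each algorithm recognized each probe correctly.

```python
# Minimal sketch (illustrative, not the chapter's implementation) of an exact
# McNemar's test for two recognition algorithms evaluated on the same probes.
import numpy as np
from scipy.stats import binomtest

def mcnemar_exact(correct_a, correct_b):
    a = np.asarray(correct_a, dtype=bool)
    b = np.asarray(correct_b, dtype=bool)
    n01 = int(np.sum(~a & b))   # probes only algorithm B gets right
    n10 = int(np.sum(a & ~b))   # probes only algorithm A gets right
    if n01 + n10 == 0:          # no disagreements: no evidence of a difference
        return n01, n10, 1.0
    # Under H0 (equal error rates) the discordant outcomes are Binomial(n01 + n10, 0.5).
    p = binomtest(min(n01, n10), n01 + n10, p=0.5, alternative="two-sided").pvalue
    return n01, n10, p

# Hypothetical per-probe outcomes for eight probes:
a = [True, True, False, True, False, True, True, False]
b = [True, False, False, True, True, True, True, True]
print(mcnemar_exact(a, b))  # discordant counts and p-value
```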

2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Md. Rabiul Islam ◽  
Md. Abdus Sobhan

The aim of this paper is to propose a feature-fusion-based Audio-Visual Speaker Identification (AVSI) system that operates under varied illumination conditions. Among the different fusion strategies, feature-level fusion is used for the proposed AVSI system, with a Hidden Markov Model (HMM) used for learning and classification. Since the feature set contains richer information about the raw biometric data than the other fusion levels, integration at the feature level is expected to provide better authentication results. In this paper, Mel Frequency Cepstral Coefficients (MFCCs) and Linear Prediction Cepstral Coefficients (LPCCs) are combined to form the audio feature vectors, and Active Shape Model (ASM) based appearance and shape facial features are concatenated to form the visual feature vectors. These combined audio and visual features are used for the feature fusion. To reduce the dimension of the audio and visual feature vectors, Principal Component Analysis (PCA) is applied. The VALID audio-visual database, which covers four different illumination levels, is used to measure the performance of the proposed system. Experimental results show the performance of the proposed audio-visual speaker identification system for various combinations of audio and visual features.
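A minimal sketch of the feature-level fusion pipeline described above, under assumed array shapes and an illustrative function name (not taken from the paper): per-frame MFCC and LPCC vectors are stacked into the audio feature, ASM shape and appearance vectors into the visual feature, PCA reduces each modality, and the fused per-frame vectors would then be modeled by an HMM (omitted here).

```python
# Illustrative sketch of feature-level audio-visual fusion with PCA reduction.
import numpy as np
from sklearn.decomposition import PCA

def fuse_features(mfcc, lpcc, asm_shape, asm_app, n_audio=20, n_visual=20):
    """Each input is a frame-aligned (n_frames, dim) array for one utterance/video."""
    audio = np.hstack([mfcc, lpcc])              # combined audio feature per frame
    visual = np.hstack([asm_shape, asm_app])     # combined visual feature per frame
    # For brevity PCA is fit on this single sequence; in practice it would be
    # fit on the training data and then applied to every sequence.
    audio_red = PCA(n_components=n_audio).fit_transform(audio)
    visual_red = PCA(n_components=n_visual).fit_transform(visual)
    return np.hstack([audio_red, visual_red])    # fused observation sequence for the HMM

# Hypothetical dimensions: 100 frames, 13-dim MFCC/LPCC, 58-dim ASM shape/appearance.
rng = np.random.default_rng(0)
fused = fuse_features(rng.normal(size=(100, 13)), rng.normal(size=(100, 13)),
                      rng.normal(size=(100, 58)), rng.normal(size=(100, 58)))
print(fused.shape)  # (100, 40)
```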


