A feature extraction method based on optimized frame algorithm in speech recognition

Author(s):  
Sun Ying ◽  
Zhang Xueying
2015 ◽  
Vol 40 (1) ◽  
pp. 25-31
Author(s):  
Sayf A. Majeed ◽  
Hafizah Husain ◽  
Salina A. Samad

Abstract In this paper, a new feature-extraction method is proposed to achieve robustness in speech recognition systems. The method combines the benefits of phase autocorrelation (PAC) with the bark wavelet transform. PAC uses the angle between a frame and its shifted copy to measure correlation instead of the traditional dot-product autocorrelation, whereas the bark wavelet transform is a special type of wavelet transform designed particularly for speech signals. The features extracted by this combined method are called phase autocorrelation bark wavelet transform (PACWT) features. The speech recognition performance of the PACWT features is evaluated and compared to that of the conventional mel frequency cepstral coefficients (MFCC) on the TI-Digits database under different noise types and noise levels. The database is divided into male and female data. The results show that the word recognition rate using the PACWT features on noisy male data (white noise at 0 dB SNR) is 60%, whereas it is 41.35% for the MFCC features under identical conditions.
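For illustration, the sketch below computes phase autocorrelation coefficients for a single speech frame, replacing the usual dot-product correlation with the angle between the frame and its circularly shifted copy. The frame length, the lag range, and the omission of the bark wavelet stage are assumptions for this example; it is not the authors' full PACWT pipeline.

```python
# Minimal sketch of phase autocorrelation (PAC) for one speech frame.
# Frame length and lag range are illustrative assumptions; the paper's
# bark wavelet transform stage is not reproduced here.
import numpy as np

def phase_autocorrelation(frame, max_lag):
    """PAC coefficients: the angle between the frame and its circularly
    shifted copies, used in place of the usual dot-product measure."""
    x = np.asarray(frame, dtype=float)
    energy = np.dot(x, x) + 1e-12          # circular shifts preserve the norm
    pac = np.zeros(max_lag + 1)
    for k in range(max_lag + 1):
        shifted = np.roll(x, k)
        cos_angle = np.dot(x, shifted) / energy
        # Clip for numerical safety before taking the inverse cosine.
        pac[k] = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return pac

# Example usage on a synthetic 25 ms frame at 8 kHz (200 samples).
frame = np.random.randn(200)
pac_coeffs = phase_autocorrelation(frame, max_lag=50)
```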


Author(s):  
Hongbing Zhang

Nowadays, speech recognition has become one of the key technologies for human-computer interaction. Speech recognition is essentially a process of speech training and pattern recognition, which makes feature extraction technology particularly important. The quality of feature extraction is directly related to the accuracy of speech recognition. Dynamic feature parameters can effectively improve recognition accuracy, which gives dynamic feature extraction for speech high research value. Traditional dynamic feature extraction methods tend to generate redundant information, resulting in low recognition accuracy. Therefore, a new speech feature extraction method based on deep learning is proposed. Firstly, the speech signal is preprocessed by pre-emphasis, windowing, filtering and endpoint detection. Then, sliding differential cepstral (SDC) features, which carry information from the preceding and following frames, are extracted. Finally, these features are used as the input to a deep autoencoder network, which extracts dynamic features that represent the deep essence of the speech information. Simulation results show that the dynamic features extracted by deep learning achieve better recognition performance than the original features and work well in speech recognition.
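The sketch below illustrates the pre-processing and SDC-style dynamic-feature steps described above. The pre-emphasis coefficient, frame size, hop size, and SDC parameters (d, P, k) are illustrative assumptions rather than the paper's values, and the deep autoencoder stage is not reproduced here.

```python
# Minimal sketch of speech pre-processing (pre-emphasis, framing, windowing)
# and sliding/shifted delta cepstral (SDC) dynamic features.
# All parameter values are illustrative assumptions.
import numpy as np

def pre_emphasis(signal, alpha=0.97):
    """First-order high-pass filter: y[n] = x[n] - alpha * x[n-1]."""
    return np.append(signal[0], signal[1:] - alpha * signal[:-1])

def frame_and_window(signal, frame_len=200, hop=80):
    """Split the signal into overlapping frames and apply a Hamming window."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop: i * hop + frame_len] for i in range(n_frames)])
    return frames * np.hamming(frame_len)

def sdc(cepstra, d=1, P=3, k=7):
    """Stack k delta vectors, each computed over a span of 2*d frames at shifts
    of P frames, so each frame carries preceding and following frame information."""
    T, _ = cepstra.shape
    padded = np.pad(cepstra, ((d, d + (k - 1) * P), (0, 0)), mode="edge")
    blocks = []
    for i in range(k):
        delta = padded[2 * d + i * P: 2 * d + i * P + T] - padded[i * P: i * P + T]
        blocks.append(delta)
    return np.hstack(blocks)

# Example usage: pre-process 1 s of synthetic 16 kHz speech, then build SDC
# features from synthetic 13-dimensional cepstra.
frames = frame_and_window(pre_emphasis(np.random.randn(16000)))
cepstra = np.random.randn(50, 13)
dynamic = sdc(cepstra)                      # shape: (50, 13 * 7)
```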


2020 ◽  
Vol 12 (1) ◽  
pp. 9
Author(s):  
Namkyoung Lee ◽  
Michael Azarian ◽  
Michael Pecht

The performance of a machine learning model depends on the quality of the features used as input to the model. Research into feature extraction methods for convolutional neural network (CNN)-based diagnostics for rotating machinery remains at a developmental stage. In general, the input to CNN-based diagnostics consists of a spectrogram without significant pre-processing. This paper introduces octave-band filtering as a feature extraction method for preprocessing a spectrogram prior to its use with a CNN. The method is an adaptation of a feature extraction approach originally developed for speech recognition. It differs from the filtering methods applied in speech recognition in its use of octave bands, with a weighting applied to the bands that is optimized for machinery diagnosis. A case study demonstrates the effectiveness of octave-band filtering: the method not only improves the accuracy of the CNN-based diagnostics but also reduces the size of the CNN.
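A minimal sketch of octave-band aggregation of a spectrogram before it is passed to a CNN is shown below. The base frequency, band edges, and the optional per-band weights are illustrative assumptions; the paper's optimized weighting is not reproduced here.

```python
# Minimal sketch: average spectrogram energy within octave bands
# [f_low * 2^i, f_low * 2^(i+1)) and optionally weight each band.
# Band edges and weights are illustrative assumptions.
import numpy as np

def octave_band_features(spectrogram, freqs, f_low=31.25, weights=None):
    """Reduce a (n_freq_bins, n_time_frames) spectrogram to one row per octave band."""
    edges, f = [], f_low
    while f < freqs[-1]:
        edges.append(f)
        f *= 2.0
    edges.append(freqs[-1] + 1.0)           # close the top band
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        if mask.any():
            bands.append(spectrogram[mask].mean(axis=0))
    features = np.stack(bands)              # shape: (n_bands, n_time_frames)
    if weights is not None:
        features = features * np.asarray(weights)[:, None]
    return features

# Example usage: a 513-bin magnitude spectrogram covering 0-8 kHz.
freqs = np.linspace(0, 8000, 513)
spec = np.abs(np.random.randn(513, 100))
banded = octave_band_features(spec, freqs)  # smaller input for the CNN
```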

