English Speech Feature Recognition Based On Digital Means

Author(s):  
Yuji Miao ◽  
Yanan Huang ◽  
Zhenjing Da

Abstract In order to improve the effect of English speech recognition, this paper combines the practical needs of English speech feature recognition with digital means to improve the underlying digital algorithm. It applies a fuzzy recognition algorithm to analyze English speech features, examines the shortcomings of traditional algorithms, proposes a fuzzy digitized English speech recognition algorithm, and builds an English speech feature recognition model on this basis. In addition, the paper performs time-frequency analysis on chaotic and speech signals to eliminate noise from the English speech features, improving the recognition effect, and constructs an English speech feature recognition system based on digital means. Finally, grouped experiments are conducted on students' English pronunciations, and the results are collected to test the performance of the system. The research results show that the proposed method is effective to a certain degree.
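
The abstract does not give implementation details for the time-frequency analysis step, which is most commonly realized with a short-time Fourier transform. The following is a minimal sketch of such an analysis, assuming a mono 16 kHz waveform in a NumPy array; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.signal import stft

def time_frequency_analysis(speech, fs=16000, frame_ms=25, hop_ms=10):
    """Compute a magnitude spectrogram (time-frequency map) of a speech signal."""
    nperseg = int(fs * frame_ms / 1000)           # samples per analysis frame
    noverlap = nperseg - int(fs * hop_ms / 1000)  # overlap between frames
    freqs, times, Z = stft(speech, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return freqs, times, np.abs(Z)                # frequency bins x frames

# Synthetic signal standing in for an English utterance.
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 440 * t) + 0.05 * np.random.randn(fs)
freqs, times, mag = time_frequency_analysis(speech, fs)
print(mag.shape)  # (frequency bins, frames)
```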

2010 ◽  
Vol 44-47 ◽  
pp. 1422-1426
Author(s):  
Mei Juan Gao ◽  
Zhi Xin Yang

In this paper, based on a study of two speech recognition algorithms, two designs of a speech recognition system are given to realize an isolated-word speech recognition mobile robot control system based on an ARM9 processor. The speech recognition process includes pre-processing of the speech signal, feature extraction, pattern matching, and post-processing. Mel-frequency cepstral coefficients (MFCC) and linear prediction cepstral coefficients (LPCC) are the two most common parameters. Analysis and comparison of the parameters show that MFCC offers better noise immunity than LPCC, so MFCC is selected as the characteristic parameter. Dynamic time warping (DTW) and the hidden Markov model (HMM) are both commonly used algorithms. Owing to the different characteristics of the DTW and HMM recognition algorithms, two different programs were designed for the mobile robot control system. The performance and speed of the two speech recognition systems were analyzed and compared.
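
As a rough illustration of the DTW variant of such a system, the sketch below matches a test utterance's MFCC frames against stored command templates; it assumes the MFCC matrices are already available as NumPy arrays and is not the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dtw_distance(template, test):
    """Dynamic time warping distance between two MFCC sequences
    (each of shape [frames, coefficients])."""
    cost = cdist(template, test, metric="euclidean")  # local frame distances
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],      # insertion
                                                 acc[i, j - 1],      # deletion
                                                 acc[i - 1, j - 1])  # match
    return acc[n, m]

def recognize(test_mfcc, templates):
    """Return the command label whose MFCC template is closest under DTW."""
    return min(templates, key=lambda label: dtw_distance(templates[label], test_mfcc))

# templates = {"forward": mfcc_forward, "stop": mfcc_stop, ...}  # recorded commands
```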


Author(s):  
Mohammed Rokibul Alam Kotwal ◽  
Foyzul Hassan ◽  
Mohammad Nurul Huda

This chapter presents Bangla (widely known as Bengali) Automatic Speech Recognition (ASR) techniques by evaluating different speech features, such as Mel Frequency Cepstral Coefficients (MFCCs), Local Features (LFs), and phoneme probabilities extracted by time-delay artificial neural networks of different architectures. Moreover, canonicalization of speech features is also performed for Gender-Independent (GI) ASR. In the canonicalization process, the authors design three classifiers for male, female, and GI speakers and extract the output probabilities from these classifiers to take their maximum. Maximizing the output probabilities for each speech file provides higher correctness and accuracy for GI speech recognition. Besides, dynamic parameters (velocity and acceleration coefficients) are also used in the experiments to obtain higher accuracy in phoneme recognition. The experiments also show that dynamic parameters combined with hybrid features increase phoneme recognition performance to a certain extent. These parameters not only increase the accuracy of the ASR system but also reduce the computational complexity of Hidden Markov Model (HMM)-based classifiers by requiring fewer mixture components.
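
The velocity and acceleration coefficients mentioned above are usually computed by regression over neighbouring frames. The sketch below shows this standard formulation under the assumption of a [frames, coefficients] MFCC matrix; it is a generic illustration, not the authors' exact setup.

```python
import numpy as np

def delta(features, width=2):
    """Regression-based delta coefficients over +/- `width` neighbouring frames."""
    padded = np.pad(features, ((width, width), (0, 0)), mode="edge")
    denom = 2 * sum(k * k for k in range(1, width + 1))
    out = np.zeros_like(features)
    for t in range(features.shape[0]):
        acc = np.zeros(features.shape[1])
        for k in range(1, width + 1):
            acc += k * (padded[t + width + k] - padded[t + width - k])
        out[t] = acc / denom
    return out

# mfcc: [frames, 13] static coefficients from any MFCC front end
# velocity = delta(mfcc); acceleration = delta(velocity)
# hybrid = np.hstack([mfcc, velocity, acceleration])  # 39-dimensional frames
```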


2014 ◽  
Vol 571-572 ◽  
pp. 205-208
Author(s):  
Guan Yu Li ◽  
Hong Zhi Yu ◽  
Yong Hong Li ◽  
Ning Ma

Speech feature extraction is discussed, and the Mel-frequency cepstral coefficient (MFCC) and perceptual linear prediction (PLP) methods are analyzed. Both types of features are extracted in a Lhasa large-vocabulary continuous speech recognition system, and the recognition results are compared.


2020 ◽  
Vol 10 (13) ◽  
pp. 4602
Author(s):  
Moa Lee ◽  
Joon-Hyuk Chang

Speech recognition for intelligent robots suffers from performance degradation due to ego-noise, which is caused by the motors, fans, and mechanical parts inside the robot, especially when the robot moves or shakes its body. To overcome the problems caused by ego-noise, we propose a robust speech recognition algorithm that uses the robot's motor-state information as an auxiliary feature. For this, we use two deep neural networks (DNNs). First, we design latent features with a bottleneck layer, an internal layer with fewer hidden units than the other layers, to represent whether the motor is operating or not. The latent features that best represent the motor-state information are generated by feeding the motor data and acoustic features into the first DNN. Second, once the motor-state-dependent latent features are produced by the first DNN, the second DNN, which performs acoustic modeling, receives the latent features as input along with the acoustic features. We evaluated the proposed system on the LibriSpeech database. The proposed network enables efficient compression of the acoustic and motor-state information, and the resulting word error rate (WER) is superior to that of a conventional speech recognition system.
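
The abstract describes the two-network design only at a high level. The PyTorch sketch below is one plausible reading of it: a first network compresses acoustic plus motor-state inputs through a bottleneck layer, and a second network consumes the acoustic features concatenated with those latent features. All layer sizes and dimensions are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class BottleneckNet(nn.Module):
    """First DNN: acoustic + motor-state input, small bottleneck layer."""
    def __init__(self, acoustic_dim=40, motor_dim=4, bottleneck_dim=8):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Linear(acoustic_dim + motor_dim, 256), nn.ReLU(),
            nn.Linear(256, bottleneck_dim), nn.ReLU(),   # bottleneck (latent) layer
        )
        self.classify = nn.Linear(bottleneck_dim, 2)     # motor operating: yes / no

    def forward(self, acoustic, motor):
        latent = self.encode(torch.cat([acoustic, motor], dim=-1))
        return latent, self.classify(latent)

class AcousticModel(nn.Module):
    """Second DNN: acoustic features plus the motor-state latent features."""
    def __init__(self, acoustic_dim=40, bottleneck_dim=8, n_states=2000):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(acoustic_dim + bottleneck_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, n_states),                    # acoustic-state posteriors
        )

    def forward(self, acoustic, latent):
        return self.net(torch.cat([acoustic, latent], dim=-1))
```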


Author(s):  
Youllia Indrawaty Nurhasanah ◽  
Irma Amelia Dewi ◽  
Bagus Ade Saputro

Historically, the study of the Qur'an in Indonesia evolved along with the spread of Islam. Methods for learning to read the Qur'an, such as al-Baghdadi, al-Barqi, Qiraati, Iqro', Human, Tartila, and others, have been developed to make learning to read the Qur'an easier. Currently, speech recognition technology can be used to detect the pronunciation of Iqro volume 3 readings. Speech recognition consists of two general stages: feature extraction and speech matching. The feature extraction stage derives features from the speech signal, and the speech matching stage compares the test utterance with the trained utterances. The speech recognition method used to recognize Iqro readings extracts speech signal features with Mel Frequency Cepstral Coefficients (MFCC) and classifies them with Vector Quantization (VQ) to obtain the matching result. The speech recognition system for Iqro readings was tested on a sample of 30 people; 6 utterances failed to be recognized, so the system has a success rate of 80%.
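
A minimal sketch of the MFCC-plus-VQ pipeline described above: k-means builds one codebook per reference reading, and the average quantization distortion decides the match at test time. The codebook size and the function names are generic choices, not the authors' exact configuration.

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

def train_codebook(mfcc_frames, codebook_size=32):
    """Build a VQ codebook from training MFCC frames ([frames, coefficients])."""
    codebook, _ = kmeans(mfcc_frames.astype(float), codebook_size)
    return codebook

def distortion(mfcc_frames, codebook):
    """Average distance between test frames and their nearest codewords."""
    _, dists = vq(mfcc_frames.astype(float), codebook)
    return dists.mean()

def recognize(test_mfcc, codebooks):
    """Pick the Iqro reading whose codebook gives the lowest distortion."""
    return min(codebooks, key=lambda label: distortion(test_mfcc, codebooks[label]))

# codebooks = {reading: train_codebook(mfcc) for reading, mfcc in training_data.items()}
```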


1970 ◽  
Vol 110 (4) ◽  
pp. 113-116 ◽  
Author(s):  
R. Lileikyte ◽  
L. Telksnys

Selection of the best feature set is the key to a successful speech recognition system, and a quality measure is needed to characterize the chosen feature set. A variety of feature quality metrics have been proposed by other authors; however, no guidance is given on choosing the appropriate metric, and no investigations of such metrics for speech features have been made. In this paper, a methodology for estimating the quality of speech features is presented: metrics should be chosen on the basis of their correlation with classification results. Linear Frequency Cepstrum (LFCC), Mel Frequency Cepstrum (MFCC), and Perceptual Linear Prediction (PLP) analyses were selected for the experiment, and the most suitable metric was chosen in combination with a Dynamic Time Warping (DTW) classifier. Experimental results are presented. Ill. 5, bibl. 18, tabl. 3 (in English; abstracts in English and Lithuanian). http://dx.doi.org/10.5755/j01.eee.110.4.302
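
The selection rule described above, keeping the quality metric that correlates best with classification results, can be sketched as follows. The metric scores and DTW accuracies are placeholder arrays for the three analysed feature sets, not numbers from the paper.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical values for the three analysed feature sets (LFCC, MFCC, PLP):
# each candidate metric scores every feature set, and DTW classification
# accuracy is measured for the same feature sets.
dtw_accuracy = np.array([0.88, 0.94, 0.92])       # placeholder classification results
candidate_metrics = {
    "metric_A": np.array([0.41, 0.55, 0.50]),     # placeholder metric scores
    "metric_B": np.array([0.70, 0.62, 0.69]),
}

# Keep the metric whose scores track the classifier's behaviour most closely.
best = max(candidate_metrics,
           key=lambda name: abs(pearsonr(candidate_metrics[name], dtw_accuracy)[0]))
print("selected metric:", best)
```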


2020 ◽  
pp. 1-11
Author(s):  
Qian Hou ◽  
Cuijuan Li ◽  
Min Kang ◽  
Xin Zhao

English feature recognition has a certain influence on the development of intelligent English-learning technology. In particular, speech recognition technology suffers from accuracy problems when performing English feature recognition. To improve the English feature recognition effect, this study takes an intelligent learning algorithm as the system algorithm, combines it with support vector machines to construct an English feature recognition system, and uses linear and nonlinear classifiers to carry out the recognition. Moreover, spectral subtraction is introduced at the front end of feature extraction: the estimated spectral amplitude of the noise is subtracted from the spectral amplitude of the noisy signal to obtain the spectral amplitude of the clean signal. Taking advantage of the insensitivity of speech perception to phase, the phase information of the noisy signal is reused to reconstruct the signal after spectral subtraction, yielding the denoised speech. In addition, this study uses a nonlinear power function that simulates the hearing characteristics of the human ear to extract features from the denoised speech signal and combines them with the English features for recognition. Finally, the performance of the proposed algorithm is analyzed through comparative experiments. The research results show that the algorithm is effective to a certain extent.
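
A minimal sketch of the spectral-subtraction front end described above: a noise magnitude estimate is subtracted from the noisy-speech magnitude, and the noisy phase is reused for reconstruction. The frame size and the noise-estimation rule (averaging the first few frames) are assumptions, not the authors' settings.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, fs=16000, nperseg=512, noise_frames=10):
    """Denoise speech by subtracting an estimated noise magnitude spectrum."""
    _, _, Z = stft(noisy, fs=fs, nperseg=nperseg)
    magnitude, phase = np.abs(Z), np.angle(Z)
    # Estimate the noise spectrum from the first few (assumed speech-free) frames.
    noise_mag = magnitude[:, :noise_frames].mean(axis=1, keepdims=True)
    clean_mag = np.maximum(magnitude - noise_mag, 0.0)   # half-wave rectification
    # Reuse the noisy phase: speech perception is largely insensitive to phase.
    clean_Z = clean_mag * np.exp(1j * phase)
    _, denoised = istft(clean_Z, fs=fs, nperseg=nperseg)
    return denoised
```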

