The Analysis and Application of the Third Dimension Speech Signal Spectrum

2013 ◽  
Vol 433-435 ◽  
pp. 376-382
Author(s):  
Gang Niu ◽  
Wen Bin Cao ◽  
Ya Jun Zhang ◽  
Guo Shun Chen ◽  
Qian Kun Yang

Addressing speech signal processing in complex acoustic environments, this paper discusses the characteristics of speech harmonics and the structure of voiced harmonics. In the third-dimension frequency domain, a quadratic Fourier transform algorithm based on the logarithmic amplitude-frequency characteristic is used to propose the concept of the "third-dimension spectral harmonic ratio", derived from the quasi-sinusoidal behavior of slices of the speech short-time Fourier spectrum; this ratio is considered an important basis for speech activity detection. The third-dimension spectral harmonic ratio marks the speech signal as a special signal and separates it completely from other noise signals. For noise outside the speech segments, there is no longer any need to analyze its characteristics, nor to pre-process it in order to shield the noise accurately, which brings new ideas to speech signal detection in complex noise environments.
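A quadratic (second) Fourier transform of the log-magnitude spectrum can be sketched as follows. The harmonic-ratio statistic here is a hypothetical illustration of the idea, not the paper's exact definition; the frame length, FFT size, and the `exclude_low` guard are assumptions:

```python
import numpy as np

def third_dim_spectrum(frame, n_fft=512):
    """Second Fourier transform of the log-magnitude spectrum of a
    short-time frame (a cepstrum-like 'third dimension' domain)."""
    spec = np.abs(np.fft.rfft(frame, n_fft))
    log_mag = np.log(spec + 1e-10)        # log amplitude-frequency curve
    return np.abs(np.fft.rfft(log_mag))   # quadratic Fourier transform

def harmonic_ratio(third_spec, exclude_low=2):
    """Hypothetical 'spectral harmonic ratio': peak energy in the
    third-dimension spectrum relative to its total energy."""
    body = third_spec[exclude_low:]       # skip DC / envelope terms
    return body.max() / (body.sum() + 1e-10)

# Voiced speech is quasi-periodic, so its log spectrum has evenly spaced
# harmonic peaks that concentrate energy in the second transform; noise
# does not.
fs = 8000
t = np.arange(256) / fs
voiced = sum(np.sin(2 * np.pi * 200 * k * t) for k in range(1, 5))
noise = np.random.default_rng(0).standard_normal(256)

r_voiced = harmonic_ratio(third_dim_spectrum(voiced))
r_noise = harmonic_ratio(third_dim_spectrum(noise))
```

The intuition matches the abstract: the periodic comb of harmonic peaks in a voiced frame's log spectrum produces a sharp peak after the second transform, while noise yields a diffuse one.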

2010 ◽  
Vol 159 ◽  
pp. 68-71
Author(s):  
Bo Gao ◽  
Zi Ming Kou ◽  
Hong Wei Yan

Speaker Recognition (SR) is an important branch of speech recognition. Current speech signal processing in SR uses short-time processing, i.e., it assumes that speech signals are short-time stationary; in fact, speech signals are non-stationary. Wavelet analysis is a newer analysis tool well suited to non-stationary signals, and it has achieved impressive results in signal coding. On this basis, wavelet analysis theory was introduced into SR research to improve traditional speech segmentation methods and characteristic parameters. To speed up recognition, an SR model based on a search tree is also proposed.
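A minimal sketch of the wavelet machinery involved, using the Haar wavelet implemented directly in NumPy (the paper does not specify which wavelet family it uses; Haar is chosen here only for simplicity). Per-level detail energies are one plausible form of the "characteristic parameters" mentioned above:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar discrete wavelet transform:
    returns (approximation, detail) coefficients."""
    x = x[: len(x) // 2 * 2]              # truncate to even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass (detail)
    return a, d

def wavedec(x, levels=3):
    """Multi-level decomposition: [a_n, d_n, ..., d_1]."""
    coeffs = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        coeffs.append(d)
    coeffs.append(a)
    return coeffs[::-1]

# Energy per sub-band as a simple characteristic-parameter vector:
sig = np.sin(2 * np.pi * 50 * np.arange(512) / 8000)
coeffs = wavedec(sig, levels=3)
energies = [float(np.sum(c ** 2)) for c in coeffs]
```

Because the Haar transform is orthonormal, the sub-band energies sum to the signal energy, so the decomposition loses no information while exposing the time-frequency structure that short-time stationarity assumptions miss.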


2021 ◽  
Vol 263 (2) ◽  
pp. 4570-4580
Author(s):  
Liu Ting ◽  
Luo Xinwei

The recognition accuracy of speech and noise signals degrades greatly at low signal-to-noise ratios. A neural network with parameters learned from a training set can perform well on existing data but performs poorly on samples with different environmental noises. The method proposed here first extracts features based on the physical characteristics of the speech signal, which are robust. It takes 3-second data segments as samples, judges whether a speech component is present under low signal-to-noise ratio, and assigns a decision tag to each segment: if a trajectory resembling a speech trajectory is found, the 3-second segment is judged to contain speech. Next, dynamic double-threshold processing is used for preliminary detection, and a global double threshold is obtained by K-means clustering. Finally, the detection result is obtained by sequential decision. The method has low complexity and strong robustness. Experimental results show that its performance is better than that of traditional methods under various signal-to-noise ratios, and that it adapts well to multiple languages.
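The double-threshold-plus-K-means step can be sketched as below. This is a generic energy-based variant, not the authors' exact algorithm: the frame length, the rule deriving the high/low thresholds from the two cluster centres (the 0.25 factor in particular), and the hangover extension are all assumptions:

```python
import numpy as np

def frame_energy(x, frame_len=160):
    """Short-time energy per non-overlapping frame."""
    n = len(x) // frame_len
    return np.sum(x[: n * frame_len].reshape(n, frame_len) ** 2, axis=1)

def kmeans_2(values, iters=20):
    """Two-cluster 1-D K-means; the gap between the cluster centres
    is what the global double threshold is derived from."""
    c = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        assign = np.abs(values[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(assign == k):
                c[k] = values[assign == k].mean()
    return np.sort(c)

def double_threshold_vad(x, frame_len=160):
    e = frame_energy(x, frame_len)
    c = kmeans_2(e)
    high = c.mean()                    # onset threshold (assumed rule)
    low = c[0] + 0.25 * (c[1] - c[0])  # hangover threshold (assumed rule)
    active = e > high
    # extend detected regions outward while energy stays above `low`
    for i in range(1, len(e)):
        if not active[i] and active[i - 1] and e[i] > low:
            active[i] = True
    for i in range(len(e) - 2, -1, -1):
        if not active[i] and active[i + 1] and e[i] > low:
            active[i] = True
    return active

# Toy example: 1 s of weak noise at 8 kHz with a tone burst in the middle.
rng = np.random.default_rng(1)
speech = 0.01 * rng.standard_normal(8000)
speech[3200:4800] += np.sin(2 * np.pi * 300 * np.arange(1600) / 8000)
flags = double_threshold_vad(speech)
```

The high threshold triggers detection and the low threshold keeps low-energy speech tails attached to the detected segment, which is the usual rationale for a double threshold.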


Doklady BGUIR ◽  
2020 ◽  
pp. 43-51 ◽  
Author(s):  
M. I. Porhun ◽  
M. I. Vashkevich

The purpose of this work was to develop a speech signal processing method for correcting hearing pathologies, based on psychoacoustically motivated transposition of high-frequency components of the signal spectrum into the low-frequency region with subsequent frequency-dependent amplification. To achieve this goal, several tasks related to the principles of frequency transposition in a speech signal were solved. The method is adjusted according to the audiogram of the hearing-impaired person. For frequency transposition, source and target frequency bands are selected; the width of the source band is fixed, while the width of the target band is adaptive. Spectrum transposition is performed only for consonants, whose perception is most difficult for people with hearing loss. The classification of sounds (into vowel, consonant, and pause classes) is implemented with a single-layer neural network. The feature vector consists of the zero-crossing rate, short-time energy, short-time magnitude, the normalized autocorrelation function, and the first spectral moment. To preserve the naturalness of transposed sounds, the concept of equal loudness is used, and to compensate for the attenuated perception of sound by a hearing-impaired person, frequency-dependent signal amplification based on the audiogram is applied. The effectiveness of the proposed method was verified experimentally using a hearing loss simulation. Ten participants listened to recordings passed through the hearing loss model, both with and without subsequent correction by the proposed method. The results showed that the proposed hearing correction method improves speech intelligibility by 6% on average.
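The core transposition step can be sketched as below. The band edges, the linear bin re-mapping, and the additive mixing into the target band are illustrative assumptions; the paper derives the actual bands adaptively from the listener's audiogram:

```python
import numpy as np

def transpose_band(frame, fs, src=(4000, 6000), dst=(1500, 2500)):
    """Sketch of frequency transposition: spectral magnitude in the
    source band is re-sampled onto the (narrower) target band and the
    source band is removed. Band edges are illustrative only."""
    spec = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), 1 / fs)
    src_idx = np.where((freqs >= src[0]) & (freqs < src[1]))[0]
    dst_idx = np.where((freqs >= dst[0]) & (freqs < dst[1]))[0]
    out = spec.copy()
    # linearly re-map the fixed-width source band onto the target band
    mapped = np.interp(np.linspace(0, 1, len(dst_idx)),
                       np.linspace(0, 1, len(src_idx)),
                       np.abs(spec[src_idx]))
    out[dst_idx] += mapped * np.exp(1j * np.angle(spec[dst_idx]))
    out[src_idx] = 0                 # drop the band the listener cannot hear
    return np.fft.irfft(out, len(frame))

# A 5 kHz tone (inside the source band) should reappear inside the
# 1.5-2.5 kHz target band after transposition.
fs, n = 16000, 512
tone = np.sin(2 * np.pi * 5000 * np.arange(n) / fs)
shifted = transpose_band(tone, fs)
spec_out = np.abs(np.fft.rfft(shifted))
```

In a full system this per-frame operation would run inside an overlap-add loop, be gated by the consonant/vowel/pause classifier, and be followed by the audiogram-driven frequency-dependent gain described above.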


2008 ◽  
Vol 1 (2) ◽  
pp. 103-115
Author(s):  
Yinzhi Lai ◽  
Lina Wang ◽  
Ke Cheng ◽  
William Kisaalita
