Statistical estimators of a periodically correlated random process for a voiced speech signal

2003 ◽  
Vol 113 (4) ◽  
pp. 2271-2271 ◽  
Author(s):  
Lesya B. Chorna

2007 ◽
Vol 2007 ◽  
pp. 1-5 ◽  
Author(s):  
Aïcha Bouzid ◽  
Noureddine Ellouze

This paper describes a multiscale product method (MPM) for measuring the open quotient in voiced speech. The method is based on determining the glottal closing and opening instants. The proposed approach consists of multiplying the wavelet transforms of the speech signal at different scales in order to enhance edge detection and parameter estimation. We show that the proposed method is effective and robust for detecting speech singularities. Accurate estimation of glottal closing instants (GCIs) and glottal opening instants (GOIs) is important in a wide range of speech processing tasks. In this paper, accurate estimation of GCIs and GOIs is used to measure the local open quotient (Oq), defined as the ratio of the open time to the pitch period. The multiscale product operates automatically on the speech signal; the reference electroglottogram (EGG) signal is used for performance evaluation. The rate of correct GCI detection is 95.5% and that of GOI detection is 76%. The pitch period relative error is 2.6% and the open phase relative error is 5.6%. The relative error measured on the open quotient reaches 3% over the whole Keele database.
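As a rough illustration of the multiscale product idea, here is a minimal Python sketch assuming a NumPy/SciPy/PyWavelets environment; the Gaussian-derivative wavelet, the three scales, the peak-picking threshold, and the f0 range are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def multiscale_product(speech, scales=(2, 4, 8), wavelet="gaus1"):
    """Product of wavelet transforms at a few scales.

    Singularities (abrupt glottal closings/openings) reinforce across
    scales, while noise tends to cancel in the product. An odd number
    of scales preserves the sign of the extrema.
    """
    coeffs, _ = pywt.cwt(speech, scales, wavelet)   # shape: (len(scales), N)
    return np.prod(coeffs, axis=0)

def open_quotient(speech, fs, min_f0=60.0, max_f0=400.0):
    """Rough open-quotient estimate from multiscale-product extrema.

    GCIs are taken as the strongest negative peaks of the product, GOIs
    as the strongest positive peak between consecutive GCIs;
    Oq = open time / pitch period.
    """
    mp = multiscale_product(speech)
    min_dist = int(fs / max_f0)

    # Candidate GCIs: prominent negative extrema of the product.
    gci, _ = find_peaks(-mp, distance=min_dist,
                        height=0.3 * np.max(np.abs(mp)))

    oq = []
    for k in range(len(gci) - 1):
        t0 = gci[k + 1] - gci[k]                  # pitch period in samples
        if not (fs / max_f0 <= t0 <= fs / min_f0):
            continue                              # reject implausible periods
        segment = mp[gci[k]:gci[k + 1]]
        goi = gci[k] + int(np.argmax(segment))    # strongest positive peak = GOI
        oq.append((gci[k + 1] - goi) / t0)        # open phase runs GOI -> next GCI
    return np.array(oq)
```

In practice the candidate instants would be validated against a reference EGG signal, as the abstract describes for the Keele database evaluation.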


2014 ◽  
Vol 8 (1) ◽  
pp. 508-511
Author(s):  
Zhongbao Chen ◽  
Zhigang Fang ◽  
Jie Xu ◽  
Pengying Du ◽  
Xiaoping Luo

Speech can be broadly categorized into voiceless, voiced, and mute segments, and voiced speech can be further classified into vowels and voiced consonants. With the ever-increasing demand for speech synthesis applications, an effective classification method is needed to differentiate vowel and voiced-consonant signals, since they are two distinct components that affect the naturalness of synthetic speech. State-of-the-art algorithms for speech signal classification are effective at classifying voiceless, voiced, and mute segments, but not at further classifying the voiced signal. To address this issue, a new speech classification algorithm based on the Gaussian Mixture Model (GMM) is proposed, which directly classifies speech into voiceless, voiced-consonant, vowel, and mute segments. Simulation results demonstrate that the proposed algorithm is effective even in noisy environments.
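The following is a minimal sketch of a GMM-based frame classifier of this kind, assuming MFCC features computed with librosa and scikit-learn's GaussianMixture; training one GMM per class and deciding by maximum log-likelihood is an assumed baseline, not the paper's exact feature set or model.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

CLASSES = ["mute", "voiceless", "voiced_consonant", "vowel"]

def frame_features(signal, fs, n_mfcc=13):
    """Per-frame MFCC features, one row per frame."""
    mfcc = librosa.feature.mfcc(y=signal, sr=fs, n_mfcc=n_mfcc)
    return mfcc.T

def train_class_gmms(train_data, fs, n_components=8):
    """Fit one GMM per class from labelled training signals.

    train_data: dict mapping class name -> list of 1-D signals.
    """
    gmms = {}
    for label in CLASSES:
        feats = np.vstack([frame_features(x, fs) for x in train_data[label]])
        gmms[label] = GaussianMixture(n_components=n_components,
                                      covariance_type="diag").fit(feats)
    return gmms

def classify_frames(signal, fs, gmms):
    """Assign each frame to the class whose GMM gives the highest log-likelihood."""
    feats = frame_features(signal, fs)
    scores = np.stack([gmms[label].score_samples(feats) for label in CLASSES])
    return [CLASSES[i] for i in np.argmax(scores, axis=0)]
```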


2012 ◽  
Vol 532-533 ◽  
pp. 1253-1257
Author(s):  
Li Hai Yao ◽  
Jie Xu ◽  
Hao Jiang

Speech can be broadly categorized into voiceless, voiced, and mute segments, and voiced speech can be further classified into vowels and voiced consonants. With the ever-increasing demand for speech synthesis applications, an effective classification method is needed to differentiate vowel and voiced-consonant signals, since they are two distinct components that affect the naturalness of synthetic speech. State-of-the-art algorithms for speech signal classification are effective at classifying voiceless, voiced, and mute segments, but not at further classifying the voiced signal. To address this issue, a new speech classification algorithm based on the Gaussian Mixture Model (GMM) is proposed, which directly classifies speech into voiceless, voiced-consonant, vowel, and mute segments. Specifically, a new speech feature is proposed, and the GMM is modified for speech classification. Simulation results demonstrate that the proposed algorithm is effective even in noisy environments.
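The abstract does not specify the proposed feature. Purely as an illustration, the sketch below augments standard MFCCs with zero-crossing rate and short-time energy (librosa assumed); a combined frame feature of this kind could be fed to per-class GMMs like those sketched for the previous record.

```python
import numpy as np
import librosa

def augmented_frame_features(signal, fs, n_mfcc=13,
                             frame_length=512, hop_length=256):
    """Frame features combining MFCCs with zero-crossing rate and RMS energy.

    Illustrative only: energy and zero-crossing rate help separate
    voiceless and mute frames, while the cepstral shape helps separate
    vowels from voiced consonants.
    """
    mfcc = librosa.feature.mfcc(y=signal, sr=fs, n_mfcc=n_mfcc,
                                n_fft=frame_length, hop_length=hop_length)
    zcr = librosa.feature.zero_crossing_rate(signal, frame_length=frame_length,
                                             hop_length=hop_length)
    rms = librosa.feature.rms(y=signal, frame_length=frame_length,
                              hop_length=hop_length)
    return np.vstack([mfcc, zcr, rms]).T        # one row per frame
```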

