Acoustic Features of Filled Pauses in Polish Task-Oriented Dialogues

2013 ◽  
Vol 38 (1) ◽  
pp. 63-73
Author(s):  
Maciej Karpiński

Abstract: Filled pauses (FPs) have proved to be valuable cues to speech production processes and important units in discourse analysis. Some aspects of their form and occurrence patterns have been shown to be speaker- and language-specific. In the present study, basic acoustic properties of FPs in Polish task-oriented dialogues are explored. A set of FPs was extracted from a corpus of twenty task-oriented dialogues on the basis of available annotations. After initial scrutiny and selection, a subset of the signals underwent a series of pitch, formant frequency and voice quality analyses. The significant amount of variation found in the realisations of FPs justifies their potential application in speaker recognition systems. Regular monosegmental FPs were confirmed to show relatively stable basic acoustic parameters, which allows for their easy identification and measurement but may result in less pronounced differences among speakers.
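
A minimal sketch of the kind of per-token measurement described above (F0, formants, and a voice-quality proxy for one annotated FP), assuming the Praat-based parselmouth package; the paper does not name its analysis tool, and the file name, interval times, and settings below are illustrative:

```python
# Illustrative sketch: F0, F1/F2, and HNR for one annotated filled-pause interval.
# Assumes parselmouth (Praat bindings); "fp.wav" and the interval times are placeholders.
import numpy as np
import parselmouth

snd = parselmouth.Sound("fp.wav")                      # recording containing the FP token
fp = snd.extract_part(from_time=0.10, to_time=0.45)    # hypothetical annotated FP boundaries

pitch = fp.to_pitch()
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]                                        # keep voiced frames only
print("mean F0 [Hz]:", f0.mean(), "F0 sd:", f0.std())

formants = fp.to_formant_burg()
t_mid = 0.5 * (fp.xmin + fp.xmax)                      # mid-point of the token
print("F1 [Hz]:", formants.get_value_at_time(1, t_mid))
print("F2 [Hz]:", formants.get_value_at_time(2, t_mid))

hnr = fp.to_harmonicity()                              # harmonics-to-noise ratio as a voice-quality proxy
print("mean HNR [dB]:", hnr.values[hnr.values != -200].mean())
```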

1992 ◽  
Vol 35 (3) ◽  
pp. 512-520 ◽  
Author(s):  
Jody Kreiman ◽  
Bruce R. Gerratt ◽  
Kristin Precoda ◽  
Gerald S. Berke

Sixteen listeners (10 expert, 6 naive) judged the dissimilarity of pairs of voices drawn from pathological and normal populations. Separate nonmetric multidimensional scaling solutions were calculated for each listener and voice set. The correlations between individual listeners' dissimilarity ratings were low. However, the scaling solutions indicated that each subject judged the voices in a reliable, meaningful way. Listeners differed more from one another in their judgments of the pathological voices (which varied widely on a number of acoustic parameters) than they did for the normal voices (which formed a much more homogeneous set acoustically). The acoustic features listeners used to judge dissimilarity were predictable from the characteristics of the stimulus sets: only parameters that showed substantial variability were perceptually salient across listeners. These results are consistent with prototype models of voice perception. They suggest that traditional means of assessing listener reliability in voice perception tasks may not be appropriate, and they highlight the importance of using explicit comparisons between stimuli when studying voice quality perception.
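
A hedged sketch of the per-listener analysis described above, assuming a square matrix of one listener's pairwise dissimilarity ratings and using scikit-learn's nonmetric MDS; the study's own software and stimuli are not reproduced, and the toy matrix is illustrative:

```python
# Nonmetric multidimensional scaling of one listener's dissimilarity ratings (illustrative data).
import numpy as np
from sklearn.manifold import MDS

# Hypothetical symmetric dissimilarity matrix for four voices, rated by one listener.
D = np.array([[0, 2, 6, 5],
              [2, 0, 5, 6],
              [6, 5, 0, 1],
              [5, 6, 1, 0]], dtype=float)

mds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)      # one 2-D configuration per listener
print(coords)
print("stress:", mds.stress_)      # lower stress = better fit for this listener
```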


2004 ◽  
Author(s):  
Raymond E. Slyh ◽  
Eric G. Hansen ◽  
Timothy R. Anderson

2012 ◽  
Vol 121 (8) ◽  
pp. 539-548 ◽  
Author(s):  
Soren Y. Lowell ◽  
Richard T. Kelley ◽  
Shaheen N. Awan ◽  
Raymond H. Colton ◽  
Natalie H. Chan

The performance of the Mel scale and the Bark scale is evaluated for a text-independent speaker identification system. Both scales are designed according to the human auditory system: the Mel scale follows the human ear's interpretation of pitch, while the Bark scale is based on the critical-band selectivity at which loudness becomes significantly different. Filter-bank structures defined on the Mel and Bark scales are used in speech and speaker recognition systems to extract speaker-specific speech features. It is found that Bark scale centre frequencies are more effective than Mel scale centre frequencies for Indian-dialect speaker databases. The recognition rate achieved using the Bark scale filter bank is 96% for the AISSMSIOIT database and 95% for the Marathi database.
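
The two frequency warpings named above can be written down directly. Below is a minimal sketch of commonly used Hz-to-Mel and Hz-to-Bark conversion formulas; the paper's exact filter-bank design is not given here, and these particular formula variants are an assumption:

```python
# Common Hz-to-Mel and Hz-to-Bark conversions used when placing filter-bank centre frequencies.
import numpy as np

def hz_to_mel(f_hz):
    # O'Shaughnessy formula for the Mel (perceived pitch) scale
    return 2595.0 * np.log10(1.0 + f_hz / 700.0)

def hz_to_bark(f_hz):
    # Zwicker & Terhardt approximation of the critical-band (Bark) scale
    return 13.0 * np.arctan(0.00076 * f_hz) + 3.5 * np.arctan((f_hz / 7500.0) ** 2)

# Both warpings compress high frequencies, which is why a Mel or Bark filter bank
# places more filters at low frequencies than a linear one.
for f in (250, 1000, 4000):
    print(f, "Hz ->", round(hz_to_mel(f), 1), "mel,", round(hz_to_bark(f), 2), "Bark")
```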


2021 ◽  
Vol 11 (21) ◽  
pp. 10079
Author(s):  
Muhammad Firoz Mridha ◽  
Abu Quwsar Ohi ◽  
Muhammad Mostafa Monowar ◽  
Md. Abdul Hamid ◽  
Md. Rashedul Islam ◽  
...  

Speaker recognition deals with recognizing speakers by their speech. Most speaker recognition systems are built in two stages: the first extracts low-dimensional correlation embeddings from speech, and the second performs the classification task. The robustness of a speaker recognition system mainly depends on the extraction process of the speech embeddings, which are usually pre-trained on a large-scale dataset. Because the embedding systems are pre-trained, the performance of speaker recognition models depends heavily on the domain adaptation policy and may degrade when the adaptation data are inadequate. This paper introduces a speaker recognition strategy for unlabeled data that generates clusterable embedding vectors from small fixed-size speech frames. The unsupervised training strategy rests on the assumption that a small speech segment contains a single speaker. Based on this assumption, pairwise constraints are constructed with noise augmentation policies and used to train the AutoEmbedder architecture, which generates speaker embeddings. Without relying on a domain adaptation policy, the process produces clusterable speaker embeddings in an unsupervised manner, termed unsupervised vectors (u-vectors). The evaluation is conducted on two popular English-language speaker recognition datasets, TIMIT and LibriSpeech. A Bengali dataset is also included to illustrate the diversity of domain shifts for speaker recognition systems. Finally, we conclude that the proposed approach achieves satisfactory performance using pairwise architectures.
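
A heavily hedged sketch of the pairwise-constraint idea described above: two noise-augmented views of the same short segment are treated as a must-link pair (same speaker), while segments from different recordings are treated as cannot-link. The augmentation, segment length, and helper names below are illustrative and are not the paper's AutoEmbedder implementation:

```python
# Illustrative construction of pairwise constraints from unlabeled short speech segments.
# Assumption (from the abstract): a small speech segment contains a single speaker.
import numpy as np

rng = np.random.default_rng(0)

def augment(segment, noise_std=0.005):
    """Hypothetical noise augmentation: add low-level Gaussian noise."""
    return segment + rng.normal(0.0, noise_std, size=segment.shape)

def make_pairs(segments):
    """Return (x1, x2, label) triples: 1 = same segment (must-link), 0 = different clips."""
    pairs = []
    for i, seg in enumerate(segments):
        pairs.append((seg, augment(seg), 1))             # positive: two views of one segment
        j = rng.choice([k for k in range(len(segments)) if k != i])
        pairs.append((seg, segments[j], 0))              # negative: segments from different clips
    return pairs

# Toy usage: ten fake 0.5 s segments at 16 kHz.
segments = [rng.standard_normal(8000) for _ in range(10)]
pairs = make_pairs(segments)
print(len(pairs), "constraint pairs")
```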


Author(s):  
Syed Akhter Hossain ◽  
M. Lutfar Rahman ◽  
Faruk Ahmed ◽  
M. Abdus Sobhan

The aim of this chapter is to clearly understand the salient features of Bangla vowels and the sources of acoustic variability in Bangla vowels, and to suggest a classification of vowels based on normalized acoustic parameters. Possible applications in automatic speech recognition and speech enhancement have made the classification of vowels an important problem to study. However, Bangla vowels spoken by different native speakers show great variation in their formant values. This further complicates the acoustic comparison of vowels, because speakers differ in dialect and language background. Such variation necessitates normalization procedures to remove the effect of non-linguistic factors. Although several researchers have found a number of acoustic and perceptual correlates of vowels, acoustic parameters that work well in a speaker-independent manner are yet to be found. In addition, the chapter studies the acoustic features of Bangla dental consonants, in order to identify the spectral differences between consonants and to parameterize them for the synthesis of these segments. The extracted features for both Bangla vowels and dental consonants were tested and yielded good synthetic representations, demonstrating the quality of the acoustic features.
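
One widely used speaker normalization for formant data of the kind discussed above is Lobanov (z-score) normalization. The chapter does not commit to a specific procedure, so the sketch below is only an illustration with made-up formant values:

```python
# Lobanov normalization: z-score each speaker's formant values so that speaker-specific
# vocal-tract differences are reduced before cross-speaker vowel comparison (illustrative).
import numpy as np

def lobanov(formant_values):
    """Normalize one speaker's measurements of a single formant (e.g. all F1 values)."""
    f = np.asarray(formant_values, dtype=float)
    return (f - f.mean()) / f.std()

# Hypothetical F1 measurements (Hz) for two speakers producing the same four vowels.
speaker_a_f1 = [310, 620, 740, 380]
speaker_b_f1 = [280, 560, 690, 350]
print(lobanov(speaker_a_f1))
print(lobanov(speaker_b_f1))   # after normalization the vowel patterns become comparable
```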


2009 ◽  
Vol 40 (1) ◽  
pp. 7
Author(s):  
Sara Ferrari ◽  
Mitchell Silva ◽  
Vittorio Sala ◽  
Daniel Berckmans ◽  
Marcella Guarino

Cough is a key element for the monitoring and diagnosis of respiratory diseases, a cause of mortality and loss of productivity in pig houses. In order to prevent the outbreak of such diseases as much as possible, the aim of this research is to describe the acoustic features of cough sounds originating from infections due to Actinobacillosis and Pasteurellosis, and to compare them with healthy cough sounds provoked by inhalation of citric acid. The acoustic parameters investigated are the peak frequency [Hz] and the duration of the cough signals. The differences resulting from the cough sound analysis confirmed variability in the acoustic parameters according to the animals' state of health or disease. Sound analysis provides physical acoustic features that can be used as a tool to label and detect coughs in an automatic monitoring system applied on farms.
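
A minimal sketch of the two measurements named above, peak frequency and duration, for a single labeled cough segment; the file name and segment boundaries are placeholders, and the study's exact spectral settings are not specified here:

```python
# Peak frequency and duration of one labeled cough segment (illustrative).
import numpy as np
from scipy.io import wavfile

fs, audio = wavfile.read("cough.wav")          # placeholder recording
if audio.ndim > 1:
    audio = audio.mean(axis=1)                 # collapse to mono if needed
start_s, end_s = 0.20, 0.55                    # hypothetical labeled cough boundaries [s]
segment = audio[int(start_s * fs):int(end_s * fs)].astype(float)

duration = len(segment) / fs                   # duration [s]
spectrum = np.abs(np.fft.rfft(segment * np.hanning(len(segment))))
freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
peak_frequency = freqs[np.argmax(spectrum)]    # peak frequency [Hz]

print(f"duration: {duration:.3f} s, peak frequency: {peak_frequency:.1f} Hz")
```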


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Jiang Lin ◽  
Yi Yumei ◽  
Zhang Maosheng ◽  
Chen Defeng ◽  
Wang Chao ◽  
...  

In speaker recognition systems, feature extraction is a challenging task under environmental noise conditions. To improve the robustness of the features, we propose a multiscale chaotic feature for speaker recognition. We use a multiresolution analysis technique to capture finer information about different speakers in the frequency domain. We then extract the chaotic characteristics of the speech based on a nonlinear dynamic model, which helps to improve the discriminative power of the features. Finally, we use a GMM-UBM model to build the speaker recognition system. Our experimental results verify its good performance: under clean-speech and noisy-speech conditions, the EER of our method is reduced by 13.94% and 26.5%, respectively, compared with the state-of-the-art method.
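
A hedged sketch of two building blocks named above: a wavelet-based multiresolution decomposition of a frame and a GMM scored against a background model. The chaotic (nonlinear-dynamics) features and the paper's specific wavelet settings are not reproduced; everything below, including the sub-band log-energy feature, is illustrative:

```python
# Illustrative multiresolution decomposition and GMM-UBM style scoring (not the paper's system).
import numpy as np
import pywt
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def multiscale_features(frame, wavelet="db4", level=3):
    """Log-energy of each wavelet sub-band as a simple multiresolution feature vector."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

# Fake feature sets standing in for background (UBM) data and one target speaker's data.
ubm_feats = np.vstack([multiscale_features(rng.standard_normal(512)) for _ in range(200)])
spk_feats = np.vstack([multiscale_features(rng.standard_normal(512) * 1.5) for _ in range(50)])

ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(ubm_feats)
spk = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(spk_feats)

# Score a test utterance: average log-likelihood ratio against the background model.
test = np.vstack([multiscale_features(rng.standard_normal(512) * 1.5) for _ in range(10)])
llr = spk.score(test) - ubm.score(test)
print("log-likelihood ratio:", llr)
```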

