Exploring human voice prosodic features and the interaction between the excitation signal and vocal tract for Assamese speech

Author(s): Sippee Bharadwaj, Purnendu Bikash Acharjee
2021, Vol 11 (4), pp. 1970
Author(s): Martin Lasota, Petr Šidlof, Manfred Kaltenbacher, Stefan Schoder

In an aeroacoustic simulation of human voice production, the effect of the sub-grid scale (SGS) model on the acoustic spectrum was investigated. In the first step, incompressible airflow in a 3D model of the larynx with vocal folds undergoing prescribed two-degree-of-freedom oscillation was simulated by laminar and Large-Eddy Simulations (LES), using the One-Equation and Wall-Adapting Local Eddy-viscosity (WALE) SGS models. Second, the aeroacoustic sources and the sound propagation in a domain composed of the larynx and vocal tract were computed with the Perturbed Convective Wave Equation (PCWE) for the vowels [u:] and [i:]. The results show that the SGS model has a significant impact not only on the flow field but also on the spectrum of the sound sampled 1 cm downstream of the lips. With the WALE model, which is known to handle near-wall and high-shear regions more precisely, the simulations predict significantly higher peak volumetric flow rates of air than the One-Equation model, only slightly lower than the laminar simulation. Using the WALE SGS model also results in higher sound pressure levels at the higher harmonic frequencies.
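
For context, the WALE closure referenced above is the standard wall-adapting local eddy-viscosity model of Nicoud and Ducros; its eddy viscosity, quoted here for reference rather than from the paper itself, is

\nu_t = (C_w \Delta)^2 \, \frac{(S^d_{ij} S^d_{ij})^{3/2}}{(\bar{S}_{ij} \bar{S}_{ij})^{5/2} + (S^d_{ij} S^d_{ij})^{5/4}}

where \bar{S}_{ij} is the resolved strain-rate tensor, S^d_{ij} the traceless symmetric part of the square of the resolved velocity-gradient tensor, \Delta the filter width, and C_w a model constant. This construction recovers the correct near-wall scaling of the eddy viscosity without damping functions, which is why the model is expected to treat the near-wall and high-shear regions of the glottal flow more faithfully.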


2020, Vol 17 (9), pp. 4244-4247
Author(s): Vybhav Jain, S. B. Rajeshwari, Jagadish S. Kallimani

Emotion Analysis is a dynamic field of research that aims to recognize a person's emotions from their voice alone, a task better known as the Speech Emotion Recognition (SER) problem. The problem has been studied for more than a decade, with results coming from either voice analysis or text analysis. Individually, both methods have shown good accuracy, but using them in unison yields a much better result than either method alone. When people of different age groups are talking, it is important to understand the emotions behind what they say, as this in turn helps us react appropriately. To achieve this, the paper implements a model that performs emotion analysis based on both tone and text analysis: the prosodic features of the tone are analyzed, the speech is converted to text, and sentiment analysis is then applied to the extracted text to further improve the accuracy of the emotion recognition.
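
The abstract does not specify the authors' implementation; the sketch below only illustrates the tone-plus-text idea it describes, assuming librosa for prosodic features, the SpeechRecognition package for transcription, and NLTK's VADER analyzer for text sentiment. The helper names, the heuristic tone score, and the weighted fusion are hypothetical choices for illustration, not the paper's model.

```python
# Sketch of a tone-plus-text emotion analysis pipeline (illustrative only).
import librosa
import numpy as np
import speech_recognition as sr
from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")

def prosodic_valence(wav_path):
    """Crude tone score in [-1, 1] from pitch and energy statistics (assumed heuristic)."""
    y, srate = librosa.load(wav_path, sr=None)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=srate)   # frame-wise pitch estimates
    rms = librosa.feature.rms(y=y)[0]                  # frame-wise energy
    # Higher pitch variability and energy are loosely read as higher arousal here.
    score = np.tanh(np.nanstd(f0) / 50.0) * np.tanh(rms.mean() * 10.0)
    return float(np.clip(score, -1.0, 1.0))

def text_valence(wav_path):
    """Transcribe the audio and score the text with VADER sentiment."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    text = recognizer.recognize_google(audio)          # any ASR backend would do
    return SentimentIntensityAnalyzer().polarity_scores(text)["compound"]

def fused_emotion_score(wav_path, w_tone=0.5):
    """Weighted fusion of the tone-based and text-based valence scores."""
    return w_tone * prosodic_valence(wav_path) + (1 - w_tone) * text_valence(wav_path)
```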


2019, Vol 210, pp. 38-45
Author(s): Sandy Bensoussan, Raphaëlle Tigeot, Alban Lemasson, Marie-Christine Meunier-Salaün, Céline Tallet

2017, Vol 41, pp. 116-127
Author(s): Peter Birkholz, Lucia Martin, Yi Xu, Stefan Scherbaum, Christiane Neuschaefer-Rube

2005, Vol 40, pp. 33-43
Author(s): Alban Gebler, Roland Frey

To understand the functional morphology of the human voice-producing system, data on the vocal tract anatomy of other mammalian species are needed. The larynges and vocal tracts of four species of Artiodactyla were investigated in combination with acoustic analyses of their respective calls. Different evolutionary specializations of laryngeal characters may lead to similar effects on sound production. In the investigated species, such specializations are: the elongation and mass increase of the vocal folds, the volume increase of the laryngeal vestibulum by an enlarged thyroid cartilage, and the formation of laryngeal ventricles. Both the elongation of the vocal folds and the increase of the oscillating masses lower the fundamental frequency. The influence of an increased volume of the laryngeal vestibulum on sound production remains unclear. The anatomical and acoustic results are presented together with considerations about the habitats and the mating systems of the respective species.
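
The link drawn above between fold length, oscillating mass, and fundamental frequency can be illustrated with the idealized vibrating-string approximation often used in voice science (a simplification quoted for illustration, not a formula from the paper): for effective fold length L, tension T, and mass per unit length \mu,

f_0 \approx \frac{1}{2L}\sqrt{\frac{T}{\mu}},

so elongating the folds (larger L) or adding oscillating mass (larger \mu) lowers f_0 at a given tension.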


Author(s): Hemanta Kumar Palo, Debasis Behera

Emotions depend on age, gender, culture, speaker, and situation. Because children have an underdeveloped vocal tract and vocal folds, while older adults have a weak, aged speech production mechanism, the acoustic properties of speech differ with a person's age. Accordingly, the features describing the age-related and emotionally relevant information in the human voice also differ. This motivates the authors to investigate a number of issues related to database collection, feature extraction, and clustering algorithms for effective characterization and identification of human age from paralinguistic information. Prosodic features such as speech rate, pitch, log energy, and spectral parameters have been explored to characterize the chosen emotional utterances, while the efficient K-means and Fuzzy C-means clustering algorithms have been used to partition age-related emotional features for a better understanding of the related issues.
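
A minimal sketch of the feature-extraction and clustering step described above, assuming librosa and scikit-learn; the exact feature set, the onset-based speech-rate proxy, and the number of clusters are illustrative assumptions rather than the authors' configuration (a Fuzzy C-means implementation would simply replace the K-means estimator):

```python
# Cluster utterances by simple prosodic features (illustrative sketch).
import librosa
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def prosodic_features(wav_path):
    """Return [mean pitch, pitch std, mean log energy, speech-rate proxy] for one utterance."""
    y, srate = librosa.load(wav_path, sr=None)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=srate)
    log_energy = np.log(librosa.feature.rms(y=y)[0] + 1e-8)
    onsets = librosa.onset.onset_detect(y=y, sr=srate)   # onset count as a crude speech-rate proxy
    duration = len(y) / srate
    return [f0.mean(), f0.std(), log_energy.mean(), len(onsets) / duration]

def cluster_utterances(wav_paths, n_clusters=4):
    """Partition utterances with K-means on standardized prosodic features."""
    X = StandardScaler().fit_transform([prosodic_features(p) for p in wav_paths])
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
```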


2018, Vol 115 (23), pp. 5926-5931
Author(s): Hwan-Ching Tai, Yen-Ping Shen, Jer-Horng Lin, Dai-Ting Chung

The shape and design of the modern violin are largely influenced by two makers from Cremona, Italy: The instrument was invented by Andrea Amati and then improved by Antonio Stradivari. Although the construction methods of Amati and Stradivari have been carefully examined, the underlying acoustic qualities which contribute to their popularity are little understood. According to Geminiani, a Baroque violinist, the ideal violin tone should “rival the most perfect human voice.” To investigate whether Amati and Stradivari violins produce voice-like features, we recorded the scales of 15 antique Italian violins as well as male and female singers. The frequency response curves are similar between the Andrea Amati violin and human singers, up to ∼4.2 kHz. By linear predictive coding analyses, the first two formants of the Amati exhibit vowel-like qualities (F1/F2 = 503/1,583 Hz), mapping to the central region on the vowel diagram. Its third and fourth formants (F3/F4 = 2,602/3,731 Hz) resemble those produced by male singers. Using F1 to F4 values to estimate the corresponding vocal tract length, we observed that antique Italian violins generally resemble basses/baritones, but Stradivari violins are closer to tenors/altos. Furthermore, the vowel qualities of Stradivari violins show reduced backness and height. The unique formant properties displayed by Stradivari violins may represent the acoustic correlate of their distinctive brilliance perceived by musicians. Our data demonstrate that the pioneering designs of Cremonese violins exhibit voice-like qualities in their acoustic output.
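
A rough sketch of the two analysis steps named above, formant estimation by linear predictive coding and vocal tract length estimation from the formants, assuming librosa and NumPy; the LPC order, sampling rate, and the uniform quarter-wave tube approximation are assumptions for illustration, not the authors' exact procedure.

```python
# LPC formant estimation and quarter-wave vocal tract length estimate (illustrative sketch).
import librosa
import numpy as np

def estimate_formants(wav_path, order=12, n_formants=4):
    """Return the first few formant frequencies (Hz) from the LPC polynomial roots."""
    y, sr = librosa.load(wav_path, sr=16000)
    a = librosa.lpc(y, order=order)                   # coefficients of the LPC polynomial A(z)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = sorted(np.angle(roots) * sr / (2 * np.pi))
    return [f for f in freqs if f > 90][:n_formants]  # discard near-DC roots

def vocal_tract_length(formants, c=343.0):
    """Average quarter-wave tube estimate: F_n = (2n - 1) c / (4L)  =>  L = (2n - 1) c / (4 F_n)."""
    lengths = [(2 * (n + 1) - 1) * c / (4 * f) for n, f in enumerate(formants)]
    return float(np.mean(lengths))
```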


2014, Vol 2014, pp. 1-17
Author(s): Puneet Kumar Mongia, R. K. Sharma

In this study the principal focus is to examine the influence of psychological stress (both positive and negative) on human articulation and to determine the vocal tract transfer function of an individual using an inverse filtering technique. Both analyses are carried out by estimating various voice parameters. The outcomes of the psychological stress analysis indicate that all the voice parameters are affected by stress. About 35 out of 51 parameters follow a unique course of variation from normal to positive and negative stress in 32% of the total analyzed signals. The second outcome of the analysis is the vocal tract transfer function for each vowel of an individual; the analysis indicates that it can be computed by taking the mean of the pole-zero plots of that individual's vocal tract estimated over the whole day. In addition, an analysis is presented to find the relationship between the LPC coefficients of the vocal tract and the vocal tract cavities. The results indicate that all the LPC coefficients of the vocal tract are affected by a change in the position of any cavity.
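
A minimal sketch of how an all-pole vocal tract transfer function can be estimated by LPC-based inverse filtering, assuming librosa and SciPy; the LPC order, pre-emphasis, and gain estimate are illustrative assumptions, since the abstract does not give the exact inverse filtering procedure.

```python
# All-pole vocal tract transfer function H(z) = G / A(z) via LPC (illustrative sketch).
import librosa
import numpy as np
from scipy import signal

def vocal_tract_response(wav_path, order=16):
    """Return (frequencies_hz, magnitude_db) of the estimated vocal tract filter."""
    y, sr = librosa.load(wav_path, sr=16000)
    y = librosa.effects.preemphasis(y)               # flatten the glottal/lip-radiation tilt
    a = librosa.lpc(y, order=order)                  # denominator A(z); its poles model the formants
    residual = signal.lfilter(a, [1.0], y)           # inverse filtering: estimate of the excitation
    gain = np.sqrt(np.mean(residual ** 2))           # crude gain from the residual energy
    w, h = signal.freqz([gain], a, worN=512, fs=sr)  # frequency response of H(z) = gain / A(z)
    return w, 20 * np.log10(np.abs(h) + 1e-12)
```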


2020, Vol 63 (4), pp. 931-947
Author(s): Teresa L. D. Hardy, Carol A. Boliek, Daniel Aalto, Justin Lewicke, Kristopher Wells, ...

Purpose The purpose of this study was twofold: (a) to identify a set of communication-based predictors (including both acoustic and gestural variables) of masculinity–femininity ratings and (b) to explore differences in ratings between audio and audiovisual presentation modes for transgender and cisgender communicators. Method The voices and gestures of a group of cisgender men and women (n = 10 of each) and transgender women (n = 20) communicators were recorded while they recounted the story of a cartoon using acoustic and motion capture recording systems. A total of 17 acoustic and gestural variables were measured from these recordings. A group of observers (n = 20) rated each communicator's masculinity–femininity based on 30- to 45-s samples of the cartoon description presented in three modes: audio, visual, and audiovisual. Visual and audiovisual stimuli contained point-light displays standardized for size. Ratings were made using a direct magnitude estimation scale without modulus. Communication-based predictors of masculinity–femininity ratings were identified using multiple regression, and analysis of variance was used to determine the effect of presentation mode on perceptual ratings. Results Fundamental frequency, average vowel formant, and sound pressure level were identified as significant predictors of masculinity–femininity ratings for these communicators. Communicators were rated significantly more feminine in the audio than in the audiovisual mode and unreliably in the visual-only mode. Conclusions Both study purposes were met. Results support continued emphasis on fundamental frequency and vocal tract resonance in voice and communication modification training with transgender individuals and provide evidence for the potential benefit of modifying sound pressure level, especially when a masculine presentation is desired.
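
A minimal sketch of the regression step described above, assuming statsmodels; the arrays below are hypothetical placeholders, not the study's data, and serve only to show how the three significant predictors would enter a multiple regression.

```python
# Multiple regression of masculinity-femininity ratings on acoustic predictors (illustrative sketch).
import numpy as np
import statsmodels.api as sm

# Hypothetical per-communicator measurements (placeholders, not study data).
f0_hz = np.array([110.0, 125.0, 180.0, 205.0, 150.0, 220.0])          # mean fundamental frequency
avg_formant_hz = np.array([1450.0, 1480.0, 1650.0, 1720.0, 1520.0, 1750.0])  # average vowel formant
spl_db = np.array([68.0, 67.0, 62.0, 60.0, 66.0, 59.0])               # sound pressure level
rating = np.array([1.2, 1.5, 3.4, 4.1, 2.2, 4.5])                     # perceived masculinity-femininity

X = sm.add_constant(np.column_stack([f0_hz, avg_formant_hz, spl_db]))
model = sm.OLS(rating, X).fit()
print(model.summary())   # coefficients indicate each predictor's contribution to the ratings
```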

