Superior Communication of Positive Emotions Through Nonverbal Vocalisations Compared to Speech Prosody

Author(s): Roza G. Kamiloğlu, George Boateng, Alisa Balabanova, Chuting Cao, Disa A. Sauter

Abstract: The human voice communicates emotion through two different types of vocalizations: nonverbal vocalizations (brief non-linguistic sounds like laughs) and speech prosody (tone of voice). Research examining recognizability of emotions from the voice has mostly focused on either nonverbal vocalizations or speech prosody, and included few categories of positive emotions. In two preregistered experiments, we compare human listeners’ (total n = 400) recognition performance for 22 positive emotions from nonverbal vocalizations (n = 880) to that from speech prosody (n = 880). The results show that listeners were more accurate in recognizing most positive emotions from nonverbal vocalizations compared to prosodic expressions. Furthermore, acoustic classification experiments with machine learning models demonstrated that positive emotions are expressed with more distinctive acoustic patterns for nonverbal vocalizations as compared to speech prosody. Overall, the results suggest that vocal expressions of positive emotions are communicated more successfully when expressed as nonverbal vocalizations compared to speech prosody.
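The acoustic classification analysis can be illustrated with a minimal sketch: train a classifier on acoustic features of each stimulus type separately and compare cross-validated accuracy, with higher accuracy indicating more distinctive acoustic patterns. The feature matrices below are random placeholders (the study used features from the actual recordings), and scikit-learn's SVC pipeline is one plausible toolchain, not necessarily the authors' exact setup.

```python
# Sketch: compare how separable 22 positive emotions are from acoustic
# features of nonverbal vocalisations vs. speech prosody. Feature matrices
# are placeholders; the study used features from the actual recordings.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_stimuli, n_features, n_emotions = 880, 88, 22  # 880 stimuli per type

# Placeholder acoustic feature matrices (e.g., an eGeMAPS-style feature set)
X_vocalisations = rng.normal(size=(n_stimuli, n_features))
X_prosody = rng.normal(size=(n_stimuli, n_features))
y = np.repeat(np.arange(n_emotions), n_stimuli // n_emotions)  # 40 per emotion

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
for name, X in [("vocalisations", X_vocalisations), ("prosody", X_prosody)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: 5-fold CV accuracy = {acc:.3f} (chance = {1/n_emotions:.3f})")
```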

2020
Author(s): Roza Gizem Kamiloglu, George Boateng, Alisa Balabanova, Chuting Cao, Disa Sauter

The human voice communicates emotion through two different types of vocalisations: nonverbal vocalisations (brief non-linguistic sounds like laughs) and speech prosody (tone of voice). Research examining recognisability of emotions from the voice has mostly focused on either nonverbal vocalisations or speech prosody, and included few categories of positive emotions. In two preregistered experiments, we compare human listeners’ (total n = 400) recognition performance for 22 positive emotions from nonverbal vocalisations (n = 880) to that from speech prosody (n = 880). The results show that listeners were more accurate in recognising most positive emotions from nonverbal vocalisations compared to prosodic expressions. Furthermore, acoustic classification experiments with machine learning models demonstrated that positive emotions are expressed with more distinctive acoustic patterns for nonverbal vocalisations as compared to speech prosody. Overall, the results suggest that vocal expressions of positive emotions are more distinctive in terms of acoustic patterns when expressed as nonverbal vocalisations than as speech prosody, resulting in superior recognition of positive emotions from nonverbal vocalisations.


2020 · Vol 27 (2) · pp. 237-265
Author(s): Roza G. Kamiloğlu, Agneta H. Fischer, Disa A. Sauter

Abstract: Researchers examining nonverbal communication of emotions are becoming increasingly interested in differentiations between different positive emotional states like interest, relief, and pride. But despite the importance of the voice in communicating emotion in general and positive emotion in particular, there is to date no systematic review of what characterizes vocal expressions of different positive emotions. Furthermore, integration and synthesis of current findings are lacking. In this work, we comprehensively review studies (N = 108) investigating acoustic features relating to specific positive emotions in speech prosody and nonverbal vocalizations. We find that happy voices are generally loud with considerable variability in loudness, have high and variable pitch, and are high in the first two formant frequencies. When specific positive emotions are directly compared with each other, pitch mean, loudness mean, and speech rate differ across positive emotions, with patterns mapping onto clusters of emotions, so-called emotion families. For instance, pitch is higher for epistemological emotions (amusement, interest, relief), moderate for savouring emotions (contentment and pleasure), and lower for a prosocial emotion (admiration). Some, but not all, of the differences in acoustic patterns also map on to differences in arousal levels. We end by pointing to limitations in extant work and making concrete proposals for future research on positive emotions in the voice.
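The acoustic features surveyed here (pitch mean and variability, loudness) can be extracted with standard tools. A minimal sketch using librosa, where "voice.wav" is a hypothetical input recording (formant measurement would need an additional tool, such as a Praat wrapper):

```python
# Sketch: extract pitch and loudness statistics of the kind surveyed in
# the review. "voice.wav" is a hypothetical input file.
import librosa
import numpy as np

y, sr = librosa.load("voice.wav", sr=None, mono=True)

# Fundamental frequency (pitch) via probabilistic YIN
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
f0_voiced = f0[voiced_flag & ~np.isnan(f0)]  # keep voiced frames only
print(f"pitch mean: {f0_voiced.mean():.1f} Hz, pitch SD: {f0_voiced.std():.1f} Hz")

# Loudness proxy: frame-wise RMS energy and its variability
rms = librosa.feature.rms(y=y)[0]
print(f"loudness (RMS) mean: {rms.mean():.4f}, SD: {rms.std():.4f}")
```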


2019
Author(s): Roza Gizem Kamiloglu, Agneta Fischer, Disa Sauter

Researchers examining nonverbal communication of emotions are becoming increasingly interested in differentiations between different positive emotional states like interest, relief, and pride. But despite the importance of the voice in communicating emotion in general, and positive emotion in particular, there is to date no systematic review of what characterizes vocal expressions of different positive emotions. Furthermore, integration and synthesis of current findings are lacking. In this work, we comprehensively review studies (N = 108) investigating acoustic features relating to specific positive emotions in speech prosody and nonverbal vocalisations. We find that happy voices are generally loud with high variability in loudness, have high and variable pitch, and are high in the first two formant frequencies. When specific positive emotions are directly compared with each other, pitch mean, loudness mean, and speech rate differ across positive emotions, with patterns mapping onto clusters of emotions, so-called emotion families. For instance, pitch is higher for epistemological emotions (amusement, interest, relief), moderate for savouring emotions (contentment and pleasure), and lower for a prosocial emotion (admiration). Some, but not all, of the differences in acoustic patterns also map on to differences in arousal levels. We end by pointing to limitations in extant work and making concrete proposals for future research on positive emotions in the voice.


2020 · Vol 36 (1)
Author(s): Nesreen Fathi Mahmoud, Huda Zahran, Sherif Abdelmonam

Abstract
Background: This study focuses on the self-perception of the voice in the elderly as assessed by the Voice-Related Quality of Life (V-RQOL) questionnaire. This work aimed to compare differences in voice-related quality of life outcomes between (1) elderly with and without voice disorders, (2) female and male elderly with voice disorders, and (3) different types of voice disorders, and to explore the correlation between the V-RQOL and the perceptual analysis done by the clinician. Forty-three dysphonic and 44 non-dysphonic elderly participants filled out the V-RQOL protocol, which analyzes the impact of dysphonia on quality of life. A perceptual vocal assessment of each dysphonic subject was made by three voice therapists, followed by flexible nasofibrolaryngoscopy.
Results: A statistically significant difference was found between the means of the total V-RQOL scores and its subdomains for the two groups (dysphonic and non-dysphonic). No significant differences were found between male and female elderly with dysphonia. The statistical analysis showed a significant correlation between the vocal assessment made by the clinicians and the V-RQOL self-assessment made by the subjects.
Conclusions: This study provides valuable information regarding the risk factors that contribute to vocal quality in the elderly population. Our results revealed that different types of voice disorders are common among the elderly, with significant negative effects on quality of life. The poorest V-RQOL scores were observed for functional voice disorders, followed by neoplastic lesions, whereas MAPLs had the best scores.
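The group comparison and the clinician-patient agreement reported here follow standard statistical patterns. A minimal sketch with SciPy, using made-up score arrays in place of the study's data:

```python
# Sketch: compare total V-RQOL scores between dysphonic and non-dysphonic
# groups, and correlate patients' self-ratings with clinicians' perceptual
# ratings. The arrays below are illustrative, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
vrqol_dysphonic = rng.normal(60, 15, size=43)      # 43 dysphonic elderly
vrqol_non_dysphonic = rng.normal(90, 8, size=44)   # 44 non-dysphonic elderly

t, p = stats.ttest_ind(vrqol_dysphonic, vrqol_non_dysphonic, equal_var=False)
print(f"group difference: t = {t:.2f}, p = {p:.4g}")

# Agreement between clinician perceptual ratings and self-assessed V-RQOL
clinician_rating = rng.integers(0, 4, size=43)     # e.g., 0-3 severity grades
rho, p_rho = stats.spearmanr(clinician_rating, vrqol_dysphonic)
print(f"clinician vs. self-report: Spearman rho = {rho:.2f}, p = {p_rho:.4g}")
```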


2018 · Vol 57 (6) · pp. 1534-1548
Author(s): Scotty D. Craig, Noah L. Schroeder

Technology advances quickly in today’s society, particularly in instructional multimedia. One increasingly important aspect of instructional multimedia design is determining the type of voice that will provide the narration; however, research in the area is dated and limited in scope. Using a randomized pretest–posttest design, we examined the efficacy of learning from an instructional animation narrated by an older text-to-speech engine, a modern text-to-speech engine, or a recorded human voice. In most respects, those who learned from the modern text-to-speech engine did not differ statistically from those who learned from the recorded human voice in their perceptions, learning outcomes, or cognitive efficiency measures. Our results imply that software technologies may have reached a point where they can credibly and effectively deliver the narration for multimedia learning environments.
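For illustration, rendering a narration script with an off-the-shelf text-to-speech engine is straightforward; a sketch using the pyttsx3 package (one possible engine, not the one used in the study, and with an invented example script):

```python
# Sketch: render the same narration script as synthetic speech, as one
# might when preparing a text-to-speech condition for a multimedia lesson.
# pyttsx3 wraps the platform's installed TTS voices.
import pyttsx3

script = "The water cycle begins when the sun heats water in oceans and lakes."

engine = pyttsx3.init()
engine.setProperty("rate", 160)            # speaking rate, roughly words/minute
engine.save_to_file(script, "narration_tts.wav")
engine.runAndWait()                        # blocks until the file is written
```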


The aim of the project is to develop a wheelchair that can be controlled by a person's voice. The system is based on a speech recognition model: spoken commands are captured through a smartphone, which serves as the interface between the user and the machine. Such a system is particularly valuable to disabled and elderly individuals, allowing them to move freely without the assistance of others and supporting them in living independently. The hardware consists of an Arduino kit, a microcontroller, the wheelchair, and DC motors; the DC motors drive the wheelchair's movement, while an ultrasonic sensor detects obstacles in the wheelchair's path. A sketch of the command-dispatch logic appears below.
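The described control flow (smartphone speech recognition feeding a microcontroller that drives the DC motors, with an ultrasonic obstacle check) can be sketched as command-dispatch logic. The sketch is written in Python for readability; real firmware for an Arduino would be in its C++ dialect, and all function names here are hypothetical stand-ins for the hardware interfaces:

```python
# Sketch of the voice-command dispatch loop for the wheelchair controller.
# drive() and the obstacle distance input are hypothetical stand-ins for
# the DC-motor driver and the ultrasonic sensor reading.

SAFE_DISTANCE_CM = 50  # stop if an obstacle is closer than this

def drive(left: int, right: int) -> None:
    """Set left/right DC motor speeds (-255..255); placeholder for the driver."""
    print(f"motors: left={left} right={right}")

COMMANDS = {
    "forward": (200, 200),
    "back":    (-150, -150),
    "left":    (-120, 120),   # pivot left
    "right":   (120, -120),   # pivot right
    "stop":    (0, 0),
}

def step(command: str, obstacle_distance_cm: float) -> None:
    """Map one recognized voice command to a motor action, with obstacle guard."""
    command = command.strip().lower()
    if command not in COMMANDS:
        return  # ignore unrecognized speech
    left, right = COMMANDS[command]
    # Ultrasonic guard: never drive forward into a nearby obstacle
    if command == "forward" and obstacle_distance_cm < SAFE_DISTANCE_CM:
        left, right = 0, 0
    drive(left, right)

# Example: an obstacle at 30 cm blocks the forward command but not a pivot
step("forward", obstacle_distance_cm=30.0)  # -> motors stopped
step("left", obstacle_distance_cm=30.0)     # -> pivots left
```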


2021 · pp. 194084472110428
Author(s): Grace O'Grady

One year after beginning a large-scale research inquiry into how young people construct their identities, I became ill and subsequently underwent abdominal surgery, which triggered an early menopause. The process, which was experienced as creatively bruising, called to be written as “Artful Autoethnography,” using visual images and poetry to tell a “vulnerable, evocative and therapeutic” story of illness, menopause, and their subject positions in intersecting relations of power. The process, which was experienced as disempowering, called to be performed as an act of resistance and activism. This performance ethnography is in line with the call for qualitative inquirers to move beyond strict methodological boundaries. In particular, the voice of activism in this performance is in the space between data (human voice and visual art pieces) and theory. To this end, and in resisting stratifying institutional/medical discourse, the performance attempts to create a space for a merger of ethnography and activism in public/private life.


2015 · Vol 31 (2) · pp. 298-311
Author(s): Melanie Soderstrom, Melissa Reimchen, Disa Sauter, James L. Morgan


2020 · Vol 117 (21) · pp. 11364-11367
Author(s): Wim Pouw, Alexandra Paxton, Steven J. Harrison, James A. Dixon

We show that the human voice has complex acoustic qualities that are directly coupled to peripheral musculoskeletal tensioning of the body, such as subtle wrist movements. In this study, human vocalizers produced a steady-state vocalization while rhythmically moving the wrist or the arm at different tempos. Although listeners could only hear and not see the vocalizer, they were able to completely synchronize their own rhythmic wrist or arm movement with the movement of the vocalizer, which they perceived in the voice acoustics. This study corroborates recent evidence suggesting that the human voice is constrained by bodily tensioning affecting the respiratory–vocal system. The current results show that the human voice contains a bodily imprint that is directly informative for the interpersonal perception of another’s dynamic physical states.
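Movement synchrony of this kind is often quantified by comparing the instantaneous phases of the two movement signals. A minimal sketch using SciPy's Hilbert transform on simulated signals (the study used actual motion-tracking data):

```python
# Sketch: quantify synchrony between a vocalizer's and a listener's rhythmic
# movements via mean phase coherence. The sinusoids below are simulated.
import numpy as np
from scipy.signal import hilbert

fs, dur = 100, 10                      # 100 Hz sampling, 10 s
t = np.arange(0, dur, 1 / fs)
vocalizer = np.sin(2 * np.pi * 1.2 * t)        # 1.2 Hz wrist movement
listener = np.sin(2 * np.pi * 1.2 * t - 0.3)   # same tempo, slight lag
listener += 0.2 * np.random.default_rng(2).normal(size=t.size)

phase_v = np.angle(hilbert(vocalizer))
phase_l = np.angle(hilbert(listener))
# Mean phase coherence: 1 = perfectly phase-locked, 0 = no phase relation
coherence = np.abs(np.mean(np.exp(1j * (phase_v - phase_l))))
print(f"mean phase coherence: {coherence:.3f}")
```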


2019 · Vol 37 (2) · pp. 134-146
Author(s): Weixia Zhang, Fang Liu, Linshu Zhou, Wanqi Wang, Hanyuan Jiang, et al.

Timbre is an important factor that affects the perception of emotion in music. To date, little is known about the effects of timbre on neural responses to musical emotion. To address this issue, we used ERPs to investigate whether there are different neural responses to musical emotion when the same melodies are presented in different timbres. With a cross-modal affective priming paradigm, target faces were primed by affectively congruent or incongruent melodies without lyrics presented in the violin, flute, and voice. Results showed a larger P3 and a larger LPC with a left anterior distribution in response to affectively incongruent versus congruent trials in the voice version. For the flute version, however, only the LPC effect was found, which was distributed over centro-parietal electrodes. Unlike the voice and flute versions, an N400 effect was observed in the violin version. These findings revealed different patterns of neural responses to musical emotion when the same melodies were presented in different timbres, and provide evidence for the hypothesis that there are specialized neural responses to the human voice.
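The ERP contrasts reported here (P3, LPC, N400) come down to comparing mean amplitudes within time windows between congruent and incongruent trials. A schematic sketch on simulated epochs; the window bounds and simulation parameters are illustrative, not the paper's exact analysis settings:

```python
# Sketch: compare congruent vs. incongruent ERP amplitude in a P3-like
# window at one electrode. Epochs are simulated; windows are illustrative.
import numpy as np
from scipy import stats

fs = 250                                   # sampling rate (Hz)
times = np.arange(-0.2, 0.8, 1 / fs)       # epoch from -200 to 800 ms
rng = np.random.default_rng(3)

def simulate_epochs(n_trials: int, p3_gain: float) -> np.ndarray:
    """Noise plus a positive deflection peaking ~400 ms, scaled by p3_gain."""
    p3 = p3_gain * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
    return p3 + rng.normal(0, 2.0, size=(n_trials, times.size))

congruent = simulate_epochs(60, p3_gain=2.0)
incongruent = simulate_epochs(60, p3_gain=4.0)   # larger P3, as in the voice version

window = (times >= 0.3) & (times <= 0.5)         # illustrative P3 window: 300-500 ms
mean_cong = congruent[:, window].mean(axis=1)    # per-trial mean amplitude
mean_incong = incongruent[:, window].mean(axis=1)
t, p = stats.ttest_ind(mean_incong, mean_cong)
print(f"P3 window: incongruent - congruent, t = {t:.2f}, p = {p:.4g}")
```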

