Vocal Emotion Recognition
Recently Published Documents


TOTAL DOCUMENTS: 43 (five years: 15)
H-INDEX: 9 (five years: 0)

PLoS ONE, 2022, Vol. 17(1), e0261354
Author(s): Mattias Ekberg, Josefine Andin, Stefan Stenfelt, Örjan Dahlström

Previous research has shown deficits in vocal emotion recognition in sub-populations of individuals with hearing loss, making this a high-priority research topic. However, previous research has only examined vocal emotion recognition using verbal material, in which emotions are expressed through emotional prosody. There is evidence that older individuals with hearing loss suffer from deficits in general prosody recognition, not specific to emotional prosody. No study has examined the recognition of non-verbal vocalizations, which constitute another important source for the vocal communication of emotions. It might be the case that individuals with hearing loss have specific difficulties in recognizing emotions expressed through prosody in speech, but not through non-verbal vocalizations. By including both sentences and non-verbal expressions, we aim to examine whether vocal emotion recognition difficulties in middle-aged to older individuals with sensorineural mild-to-moderate hearing loss are better explained by deficits specific to vocal emotion recognition or by deficits in prosody recognition in general. Furthermore, some of the studies that have concluded that individuals with mild-to-moderate hearing loss have deficits in vocal emotion recognition ability have also found that the use of hearing aids does not improve recognition accuracy in this group. We therefore aim to examine the effects of linear amplification and audibility on the recognition of different emotions expressed both verbally and non-verbally. Besides examining accuracy for different emotions, we will also look at patterns of confusion (which specific emotions are mistaken for which others, and at what rates) during both amplified and non-amplified listening, and we will analyze all material acoustically and relate the acoustic content to performance. Together, these analyses will provide clues to the effects of amplification on the perception of different emotions. For these purposes, a total of 70 middle-aged to older individuals, half with mild-to-moderate hearing loss and half with normal hearing, will perform a computerized forced-choice vocal emotion recognition task with and without amplification.
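The confusion analysis described above can be made concrete with a short sketch. This is a minimal illustration, not the study's actual pipeline: the trial table, column names, and emotion labels below are all hypothetical.

```python
import pandas as pd

# Hypothetical forced-choice trial data: one row per trial, with the
# presented emotion ("target") and the listener's choice ("response").
trials = pd.DataFrame({
    "target":   ["anger", "anger", "sadness", "happiness", "sadness"],
    "response": ["anger", "fear",  "sadness", "happiness", "fear"],
})

# Confusion matrix: rows are presented emotions, columns are responses.
# Row-normalizing turns counts into rates, so off-diagonal cells show
# which emotions are mistaken for which others, and at what rates.
confusion = pd.crosstab(trials["target"], trials["response"],
                        normalize="index")

# Overall accuracy and accuracy per presented emotion.
correct = trials["target"] == trials["response"]
print(confusion)
print(correct.mean())                             # overall accuracy
print(correct.groupby(trials["target"]).mean())   # per-emotion accuracy
```

Comparing such matrices between amplified and non-amplified listening would reveal whether amplification shifts specific confusions rather than just overall accuracy.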


2021, Vol. 8(11)
Author(s): Leonor Neves, Marta Martins, Ana Isabel Correia, São Luís Castro, César F. Lima

The human voice is a primary channel for emotional communication. It is often presumed that being able to recognize vocal emotions is important for everyday socio-emotional functioning, but evidence for this assumption remains scarce. Here, we examined relationships between vocal emotion recognition and socio-emotional adjustment in children. The sample included 141 6- to 8-year-old children, and the emotion tasks required them to categorize five emotions (anger, disgust, fear, happiness, sadness, plus neutrality), as conveyed by two types of vocal emotional cues: speech prosody and non-verbal vocalizations such as laughter. Socio-emotional adjustment was evaluated by the children's teachers using a multidimensional questionnaire of self-regulation and social behaviour. Based on frequentist and Bayesian analyses, we found that, for speech prosody, higher emotion recognition related to better general socio-emotional adjustment. This association remained significant even when the children's cognitive ability, age, sex and parental education were held constant. Follow-up analyses indicated that higher emotional prosody recognition was more robustly related to the socio-emotional dimensions of prosocial behaviour and cognitive and behavioural self-regulation. For emotion recognition in non-verbal vocalizations, no associations with socio-emotional adjustment were found. A similar null result was obtained for an additional task focused on facial emotion recognition. Overall, these results support the close link between children's emotional prosody recognition skills and their everyday social behaviour.
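The covariate-controlled analysis reported above can be sketched in a few lines. This is a minimal illustration under assumptions: the data file, variable names, and model are hypothetical, with statsmodels used as one common choice for such regressions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per child, with a teacher-rated adjustment
# score, a prosody recognition score, and the covariates named above.
df = pd.read_csv("children.csv")

# Does prosody recognition still predict adjustment once cognitive
# ability, age, sex, and parental education are held constant?
model = smf.ols(
    "adjustment ~ prosody_recognition + cognitive_ability + age"
    " + C(sex) + parental_education",
    data=df,
).fit()
print(model.summary())
```

A significant coefficient on the recognition term, net of the covariates, is the pattern the abstract describes for speech prosody.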


Author(s): Gonçalo Cosme, Vânia Tavares, Guilherme Nobre, César Lima, Rui Sá, ...

Cross-cultural studies of emotion recognition in nonverbal vocalizations support not only the universality hypothesis, which attributes recognition to innate features, but also an in-group advantage for culture-dependent features. Nevertheless, such studies have not always accounted for differences in socio-economic and educational status, idiomatic translation of emotional concepts remains a limitation, and the underlying psychophysiological mechanisms are still unexplored. We set out to investigate whether native residents of Guinea-Bissau (West African culture) and Portugal (Western European culture), matched for socio-economic and educational status, sex, and language, varied in behavioural and autonomic system responses during emotion recognition of nonverbal vocalizations produced by Portuguese individuals. Overall, Guinea-Bissauans (the out-group) responded significantly less accurately (corrected p < .05) and more slowly, and showed a trend towards higher concomitant skin conductance, compared to the Portuguese (the in-group), findings which may indicate greater cognitive effort stemming from the greater difficulty of discerning emotions from another culture. Accuracy differences were found in particular for pleasure, amusement, and anger, rather than for sadness, relief, or fear. Nevertheless, both cultures recognized all emotions above chance level. Perceived authenticity, measured for the first time in cross-cultural research on the same nonverbal vocalizations, showed no difference in accuracy between cultures, but still a slower response from the out-group. Lastly, we provide, to our knowledge, a first account of how skin conductance response varies between nonverbally vocalized emotions, with significant differences (p < .05). In sum, we provide demographically and language-matched behavioural and psychophysiological data that support cultural and emotion effects on vocal emotion recognition and perceived authenticity, as well as the universality hypothesis.
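The above-chance claim rests on a simple benchmark: in a forced-choice task with k response options, chance accuracy is 1/k, and a one-sided binomial test asks whether a participant's hit count exceeds it. A minimal sketch, with k = 6 matching the six emotions listed above but with hypothetical trial counts:

```python
from scipy.stats import binomtest

k = 6                    # six response options, so chance accuracy is 1/6
hits, n_trials = 41, 90  # hypothetical counts for one participant
chance = 1 / k

# One-sided test: is observed accuracy reliably above chance?
result = binomtest(hits, n_trials, p=chance, alternative="greater")
print(f"chance = {chance:.3f}, observed = {hits / n_trials:.3f}, "
      f"p = {result.pvalue:.4g}")
```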


2021
Author(s): Leonor Neves, Marta Martins, Ana Isabel Correia, São Luís Castro, César F. Lima

The human voice is a primary channel for emotional communication. It is often presumed that being able to recognise vocal emotions is important for everyday socio-emotional functioning, but direct empirical evidence for this remains scarce. Here, we examined relationships between vocal emotion recognition and socio-emotional adjustment in children. The sample included 6- to 8-year-old children (N = 141). The emotion tasks required them to categorise five emotions conveyed by nonverbal vocalisations (e.g., laughter, crying) and speech prosody: anger, disgust, fear, happiness, and sadness, plus neutrality. Socio-emotional adjustment was independently evaluated by the children's teachers using a multi-dimensional questionnaire of self-regulation and social behaviour. Based on frequentist and Bayesian analyses, we found that higher emotion recognition in speech prosody related to better general socio-emotional adjustment. This association remained significant even after accounting for the children's general cognitive ability, age, sex, and parental education in multiple regressions. Follow-up analyses indicated that the advantages were particularly robust for the socio-emotional dimensions of prosocial behaviour and of cognitive and behavioural self-regulation. For emotion recognition in nonverbal vocalisations, no associations with socio-emotional adjustment were found. Overall, these results support the close link between children's emotional prosody recognition skills and their everyday social behaviour.


PeerJ, 2020, Vol. 8, e8773
Author(s): Leanne Nagels, Etienne Gaudrain, Deborah Vickers, Marta Matos Lopes, Petra Hendriks, ...

Traditionally, emotion recognition research has primarily used pictures and videos, while audio test materials are not always readily available or of good quality, which may be particularly important for studies with hearing-impaired listeners. Here we present a vocal emotion recognition test with pseudospeech productions from multiple speakers expressing three core emotions (happy, angry, and sad): the EmoHI test. The high sound quality of the recordings makes the test suitable for use with populations of children and adults with normal or impaired hearing. Here we present normative data for the development of vocal emotion recognition in normal-hearing (NH) school-age children using the EmoHI test. Furthermore, we investigated cross-language effects by testing NH Dutch and English children, and the suitability of the EmoHI test for hearing-impaired populations, specifically for prelingually deaf Dutch children with cochlear implants (CIs). Our results show that NH children's performance improved significantly with age from the youngest age group onwards (4–6 years: 48.9% correct, on average). However, NH children's performance did not reach adult-like values (adults: 94.1%) even for the oldest age group tested (10–12 years: 81.1%). Additionally, the effect of age on NH children's development did not differ across languages. All except one CI child performed at or above chance level, showing the suitability of the EmoHI test. In addition, seven out of 14 CI children performed within the NH age-appropriate range, and nine out of 14 did so when performance was adjusted for hearing age, measured from their age at CI implantation. However, CI children showed great variability in their performance, ranging from ceiling (97.2%) to below chance level (27.8%), which could not be explained by chronological age alone. The strong and consistent development of performance with age, the lack of significant differences across the tested languages for NH children, and the above-chance performance of most CI children affirm the usability and versatility of the EmoHI test.
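The hearing-age adjustment mentioned above is simple arithmetic: a CI child's auditory experience is counted from implantation rather than from birth. A minimal sketch, with the function name and example values being illustrative rather than taken from the study:

```python
def hearing_age(chronological_age: float, age_at_implantation: float) -> float:
    """Years of auditory experience since cochlear implantation."""
    return chronological_age - age_at_implantation

# A child implanted at age 2 and tested at age 9 has a hearing age of 7,
# and is compared against the normative range for 7-year-old NH children.
print(hearing_age(chronological_age=9.0, age_at_implantation=2.0))  # 7.0
```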

