Vocal expressions: Recently Published Documents

Total documents: 99 (five years: 23)
H-index: 29 (five years: 2)
Author(s): Vanessa LoBue, Marissa Ogren

Emotion understanding facilitates the development of healthy social interactions. To develop emotion knowledge, infants and young children must learn to make inferences about people's dynamically changing facial and vocal expressions in the context of their everyday lives. Given that emotional information varies so widely, the emotional input that children receive might particularly shape their emotion understanding over time. This review explores how variation in children's received emotional input shapes their emotion understanding and their emotional behavior over the course of development. Variation in emotional input from caregivers shapes individual differences in infants’ emotion perception and understanding, as well as older children's emotional behavior. Finally, this work can inform policy and focus interventions designed to help infants and young children with social-emotional development.


2021, Vol. 12
Author(s): Hisako W. Yamamoto, Misako Kawahara, Akihiro Tanaka

Due to the COVID-19 pandemic, the significance of online research has been rising in the field of psychology. However, online experiments with child participants are rare compared to those with adults. In this study, we investigated the validity of web-based experiments with child participants aged 4–12 years and with adult participants. Participants performed simple emotion perception tasks in an experiment designed and conducted on the Gorilla Experiment Builder platform. After brief communication with each participant via Zoom videoconferencing software, participants performed an auditory task (judging emotion from vocal expressions) and a visual task (judging emotion from facial expressions). The data were compared with those from a similar laboratory experiment we conducted previously, and similar tendencies were found. For the auditory task in particular, we replicated the differences between age groups in the accuracy of perceiving vocal expressions, and also found the same native-language advantage. Finally, we discuss the possibility of using online cognitive studies in future developmental research.


2021
Author(s): Nadia Guerouaou, Guillaume Vaiva, Jean-Julien Aucouturier

Rapid technological advances in artificial intelligence are creating opportunities for real-time algorithmic modulations of a person's facial and vocal expressions, or "deep-fakes". These developments raise unprecedented societal and ethical questions which, despite much recent public awareness, are still poorly understood from the point of view of moral psychology. We report here on an experimental ethics study conducted on a sample of N = 303 participants (predominantly young, Western, and educated), who evaluated the acceptability of vignettes describing potential applications of expressive voice transformation technology. We found that vocal deep-fakes were generally well accepted in the population, notably in a therapeutic context and for emotions judged otherwise difficult to control, and, surprisingly, even when the user lies to their interlocutors about using them. Unlike other emerging technologies such as autonomous vehicles, there was no evidence of a social dilemma in which one would, for example, accept for others what they resent for themselves. The only real obstacle to the large-scale deployment of vocal deep-fakes appears to be situations in which they are applied to a speaker without their knowledge, but even the acceptability of such situations was modulated by individual differences in moral values and attitudes towards science fiction.


Author(s): Roza G. Kamiloğlu, George Boateng, Alisa Balabanova, Chuting Cao, Disa A. Sauter

The human voice communicates emotion through two different types of vocalizations: nonverbal vocalizations (brief non-linguistic sounds like laughs) and speech prosody (tone of voice). Research examining the recognizability of emotions from the voice has mostly focused on either nonverbal vocalizations or speech prosody, and has included few categories of positive emotions. In two preregistered experiments, we compare human listeners' (total n = 400) recognition performance for 22 positive emotions from nonverbal vocalizations (n = 880) to that from speech prosody (n = 880). The results show that listeners were more accurate in recognizing most positive emotions from nonverbal vocalizations compared to prosodic expressions. Furthermore, acoustic classification experiments with machine learning models demonstrated that positive emotions are expressed with more distinctive acoustic patterns in nonverbal vocalizations than in speech prosody. Overall, the results suggest that vocal expressions of positive emotions are communicated more successfully when expressed as nonverbal vocalizations than as speech prosody.
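As an illustration of the kind of acoustic classification experiment mentioned above, the following minimal Python sketch summarizes each vocalization with acoustic features and cross-validates a classifier on emotion labels. The file names, labels, and the MFCC-plus-SVM pipeline are illustrative assumptions, not the study's actual materials or methods.

```python
# Illustrative sketch of an acoustic emotion-classification pipeline in the
# spirit of the experiments described above. File names, labels, and the
# MFCC+SVM feature/classifier choice are placeholders, not the study's setup.
import numpy as np
import librosa
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def acoustic_features(path):
    """Summarize one vocalization as a fixed-length acoustic feature vector."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    # Mean and standard deviation of each coefficient over time.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder corpus: one file per vocalization, labeled by emotion.
paths = ["amusement_01.wav", "amusement_02.wav",
         "relief_01.wav", "relief_02.wav"]
labels = ["amusement", "amusement", "relief", "relief"]

X = np.array([acoustic_features(p) for p in paths])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("Cross-validated accuracy:",
      cross_val_score(clf, X, labels, cv=2).mean())
```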


2021, Vol. 12
Author(s): Michal Icht, Hadar Wiznitser Ressis-tal, Meir Lotan

Pain is difficult to assess in non-verbal populations, such as individuals with intellectual and developmental disability (IDD). Because research in this area is scarce, pain assessment for individuals with IDD is still lacking, leading to maltreatment. To improve medical care for individuals with IDD, immediate, reliable, and easy-to-use pain detection methods should be developed. The goal of this preliminary study was to examine the sensitivity of acoustic features of vocal expressions in identifying pain in adults with IDD, assessing their feasibility as a pain detection indicator for these individuals. Such unique pain-related vocal characteristics may be used to develop objective pain detection tools. Adults with severe-to-profound IDD (N = 9) were recorded during daily activities associated with pain (diaper changes) or without pain (at rest). Spontaneous vocal expressions were acoustically analyzed for several voice characteristics. The analysis revealed that pain-related vocal expressions were characterized by a significantly higher number of pulses and higher shimmer values relative to no-pain vocal expressions. Pain-related productions were also characterized by longer duration, higher jitter and cepstral peak prominence values, a lower harmonics-to-noise ratio, a smaller difference between the amplitudes of the first and second harmonics (corrected for vocal tract influence; H1H2c), and higher mean and standard deviation of voice fundamental frequency relative to no-pain vocal productions, although these findings were not statistically significant, possibly due to the small and heterogeneous sample. These initial results may prompt further research exploring the use of pain-related vocal output as an objective and easily identifiable indicator of pain in this population.
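The voice measures reported above (number of pulses, jitter, shimmer, harmonics-to-noise ratio, F0 mean and standard deviation) are standard Praat quantities. The sketch below shows one way they might be extracted with the praat-parselmouth library; the file name and pitch-range settings are placeholder assumptions rather than the study's analysis settings.

```python
# Minimal sketch: extracting the voice measures discussed above
# (pulse count, jitter, shimmer, HNR, F0 mean/SD) with praat-parselmouth.
# The file name and pitch-range settings are illustrative assumptions.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("vocalization.wav")   # placeholder recording
f0_min, f0_max = 75, 600                      # assumed F0 search range (Hz)

pitch = call(snd, "To Pitch", 0.0, f0_min, f0_max)
pulses = call(snd, "To PointProcess (periodic, cc)", f0_min, f0_max)

n_pulses = call(pulses, "Get number of points")
jitter = call(pulses, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
shimmer = call([snd, pulses], "Get shimmer (local)",
               0, 0, 0.0001, 0.02, 1.3, 1.6)
harmonicity = call(snd, "To Harmonicity (cc)", 0.01, f0_min, 0.1, 1.0)
hnr = call(harmonicity, "Get mean", 0, 0)
f0_mean = call(pitch, "Get mean", 0, 0, "Hertz")
f0_sd = call(pitch, "Get standard deviation", 0, 0, "Hertz")

print(f"pulses={n_pulses}, jitter={jitter:.4f}, shimmer={shimmer:.4f}, "
      f"HNR={hnr:.1f} dB, F0={f0_mean:.1f} +/- {f0_sd:.1f} Hz")
```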


2021, Vol. 11 (5), pp. 605
Author(s): Stefan M. Brudzynski

This review summarizes all reported and suspected functions of ultrasonic vocalizations in infant and adult rats. The review leads to the conclusion that all types of ultrasonic vocalizations, whatever their function, are vocal expressions of emotional arousal initiated by the activity of the reticular core of the brainstem. This emotional arousal is dichotomous in nature and is initiated by two opposite-in-function ascending reticular systems that are separate from the cognitive reticular activating system. The mesolimbic cholinergic system initiates the aversive state of anxiety, with concomitant emission of 22 kHz calls, while the mesolimbic dopaminergic system initiates the appetitive state of hedonia, with concomitant emission of 50 kHz vocalizations. These two mutually exclusive arousal systems prepare the animal for two different behavioral outcomes. The transition from broadband infant isolation calls to the well-structured adult types of vocalizations is explained, and the social importance of adult rat vocal communication is emphasized. The association of 22 kHz and 50 kHz vocalizations with aversive and appetitive states, respectively, has been utilized in numerous quantitative preclinical models of physiological, psychological, neurological, neuropsychiatric, and neurodevelopmental investigations. The present review should aid in the understanding and interpretation of these models in biomedical research.
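As a hedged illustration of how preclinical models quantify these two call types, the sketch below counts spectrogram frames whose dominant frequency falls in an approximate 22 kHz (aversive) versus 50 kHz (appetitive) band. The detection method and band edges are simplified assumptions; dedicated USV-detection software is used in practice.

```python
# Illustrative sketch: counting rat ultrasonic calls in the aversive
# (~22 kHz) versus appetitive (~50 kHz) bands, in the spirit of the
# preclinical models described above. The detection method and band
# edges are simplified assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("usv_session.wav")  # placeholder file; must be
if audio.ndim > 1:                             # sampled fast enough for USVs
    audio = audio.mean(axis=1)                 # average stereo down to mono

f, t, sxx = spectrogram(audio.astype(float), fs=rate, nperseg=1024)
peak_freq = f[sxx.argmax(axis=0)]              # dominant frequency per frame

# Count frames whose energy peaks in each (approximate) call band.
aversive = np.sum((peak_freq >= 18e3) & (peak_freq <= 32e3))
appetitive = np.sum((peak_freq >= 35e3) & (peak_freq <= 80e3))
print(f"22-kHz band frames: {aversive}, 50-kHz band frames: {appetitive}")
```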


2021, Vol. 13 (1), pp. 51-56
Author(s): Marc D. Pell, Sonja A. Kotz

Neurocognitive models (e.g., Schirmer & Kotz, 2006) have helped to characterize how listeners incrementally derive meaning from vocal expressions of emotion in spoken language, what neural mechanisms are involved at different processing stages, and their relative time course. But how can these insights be applied to communicative situations in which prosody serves a predominantly interpersonal function? This comment examines recent data highlighting the dynamic interplay of prosody and language, when vocal attributes serve the sociopragmatic goals of the speaker or reveal interpersonal information that listeners use to construct a mental representation of what is being communicated. Our comment serves as a beacon to researchers interested in how the neurocognitive system “makes sense” of socioemotive aspects of prosody.


2020, Article 014272372096682
Author(s): Lorraine McCune, Elizabeth M. Lennon, Anne Greenwood

Pointing has long been considered influential in language acquisition. Certain pre-linguistic vocal expressions may hold even greater value in addressing the transition to language. The goal of the present study is the longitudinal evaluation of early communicative development, addressing the influence of pre-linguistic gestures and vocal expressions. This multiple-case-study report analyzes longitudinal development in five children from 9 to 16 months of age, a critical language transition period. We include the gestures of pointing and extending the hand, with interactive as well as request functions. Gestures, communicative grunts, words, and multimodal events combining gesture with vocal accompaniment comprise the data. Results demonstrate group trends and stark individual differences in children's use of vocal and gestural modalities, and the influence of the onset of grunt communication on overall communicative frequency in single and combined communicative events. We embed this analysis within the broader context of mutually interacting variables in a dynamic system. These results argue for greater attention to vocalization as well as gesture in monitoring children's approach to language development. Given the role of communicative grunts demonstrated here, this variable should be further studied in both typically developing and language-delayed children.


2020, Vol. 10 (1)
Author(s): A. S. Villain, A. Hazard, M. Danglot, C. Guérin, A. Boissy, ...

Emotions arise not only in reaction to an event but also in anticipation of it, making anticipation a means of accessing the emotional value of events. Until now, anticipation studies have rarely considered whether vocalisations carry information about emotional states. We studied both the grunts of piglets and their spatial behaviour as they anticipated two (pseudo)social events known to elicit positive emotions of different intensity: the arrival of familiar conspecifics and the arrival of a familiar human. Piglets spatially anticipated both pseudo-social contexts, and the spectro-temporal features of their grunts differed according to the emotional context. Piglets produced low-frequency grunts at a higher rate when anticipating conspecifics than when anticipating a human. Spectral noise increased when piglets expected conspecifics, whereas duration and frequency range increased when they expected a human. When the arrival of conspecifics was delayed, grunt duration increased, whereas when the arrival of the human was delayed, the spectral parameters were comparable to those during isolation. This shows that vocal expressions in piglets during anticipation are specific to the expected reward. Vocal expressions, in both their temporal and spectral features, are thus a good way to explore the emotional state of piglets during the anticipation of challenging events.
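As a rough sketch of how such spectro-temporal call parameters can be quantified, the code below computes a duration, a spectral-noise proxy, and an energy-based frequency range for a single recorded grunt. The specific choices (spectral flatness as the noise measure, 5th/95th energy quantiles as the frequency range) are assumptions for illustration, not the authors' exact measures.

```python
# Rough sketch: duration, a spectral-noise proxy, and an energy-based
# frequency range for one grunt recording. Feature choices are
# illustrative assumptions, not the authors' exact measures.
import numpy as np
import librosa

y, sr = librosa.load("grunt.wav", sr=None)      # placeholder recording

duration = librosa.get_duration(y=y, sr=sr)     # call duration in seconds

# Spectral flatness as a noise proxy: values near 1.0 indicate a
# noisier, less tonal sound.
flatness = librosa.feature.spectral_flatness(y=y).mean()

# Frequency range: band containing the central 90% of spectral energy.
power = (np.abs(librosa.stft(y)) ** 2).mean(axis=1)
freqs = librosa.fft_frequencies(sr=sr)
cdf = np.cumsum(power) / power.sum()
f_low = freqs[np.searchsorted(cdf, 0.05)]
f_high = freqs[np.searchsorted(cdf, 0.95)]

print(f"duration={duration:.2f} s, flatness={flatness:.3f}, "
      f"range={f_low:.0f}-{f_high:.0f} Hz")
```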

