Children’s recognition of emotion in music and speech

2018 ◽  
Vol 1 ◽  
pp. 205920431876265 ◽  
Author(s):  
Dianna Vidas ◽  
Genevieve A. Dingle ◽  
Nicole L. Nelson

The acoustic cues that convey emotion in speech are similar to those that convey emotion in music, and recognition of emotion in both types of cue recruits overlapping networks in the brain. Given the similarities between music and speech prosody, developmental research is uniquely positioned to determine whether recognition of these cues develops in parallel. In the present study, we asked 60 children aged 6 to 11 years, and 51 university students, to judge the emotions of 10 musical excerpts, 10 inflected speech clips, and 10 affect burst clips. The stimuli were intended to convey happiness, sadness, anger, fear, and pride, with each emotion presented twice per type of stimulus. We found that recognition of emotions in music and speech developed in parallel, and that adult levels of recognition emerged later for these stimuli than for affect bursts. Sadness was the most easily recognised emotion, followed by happiness, fear, and then anger. In addition, we found that recognition of emotion in speech and affect bursts predicted emotion recognition in music independently of age and musical training. Finally, although proud speech and affect bursts were not well recognised, children aged eight years and older showed adult-like responses in recognition of proud music.
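For illustration only, the following Python sketch shows what "predicting emotion recognition in music independently of age and musical training" typically amounts to statistically: entering speech and affect-burst accuracy into a regression on music-emotion accuracy alongside age and training as covariates. The simulated scores, variable names, and use of statsmodels are assumptions, not the authors' analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 60
age = rng.uniform(6, 11, n)                     # hypothetical ages (years)
training_years = rng.uniform(0, 4, n)           # hypothetical musical training
speech_acc = rng.uniform(0.3, 0.9, n)           # hypothetical speech-emotion accuracy
burst_acc = rng.uniform(0.4, 0.95, n)           # hypothetical affect-burst accuracy
# Simulated outcome: music-emotion accuracy driven partly by speech/burst skill
music_acc = 0.3 * speech_acc + 0.2 * burst_acc + 0.02 * age + rng.normal(0, 0.05, n)

X = sm.add_constant(np.column_stack([age, training_years, speech_acc, burst_acc]))
model = sm.OLS(music_acc, X).fit()
# Coefficients for speech_acc and burst_acc that survive the age/training covariates
print(model.params)
```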

2018 ◽  
Vol 48 (1) ◽  
pp. 150-159
Author(s):  
Jonathan M. P. Wilbiks ◽  
Sean Hutchins

There is some debate in previous research about the effects of musical training on memory for verbal material. The current research examines this relationship while also considering the effects of musical training on memory for musical excerpts. Twenty musically trained individuals were tested and compared to 20 age-matched individuals with no musical experience. Musically trained individuals demonstrated better memory for classical musical excerpts, with no significant differences for popular musical excerpts or for words. These findings support previous research showing that, although music and words overlap in how they are processed in the brain, training in one domain does not necessarily facilitate performance in the other.


2015 ◽  
Vol 2015 ◽  
pp. 1-9 ◽  
Author(s):  
Kanako Sato ◽  
Eiji Kirino ◽  
Shoji Tanaka

The brain changes flexibly in response to experience during the developmental stages of life. Previous voxel-based morphometry (VBM) studies have shown volumetric differences between musicians and nonmusicians in several brain regions, including the superior temporal gyrus, sensorimotor areas, and superior parietal cortex, but the regions reported vary across studies and are not entirely consistent. Using VBM, we investigated the effect of musical training on brain structure by comparing university students majoring in music with those majoring in nonmusic disciplines. All participants were right-handed, healthy Japanese females. We divided the nonmusic students into two groups, yielding three groups in total: music expert (ME), music hobby (MH), and nonmusic (NM). VBM showed that the ME group had the largest gray matter volumes in the right inferior frontal gyrus (IFG; BA 44), left middle occipital gyrus (BA 18), and bilateral lingual gyrus. Because the MH group showed intermediate volumes in these regions, we attribute these differences to neuroplasticity induced by long and continuous periods of musical training.
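As a schematic illustration (not the study's actual VBM pipeline), the comparison at the heart of this design can be sketched as a voxelwise one-way ANOVA of gray matter volume across the three groups. The group sizes and array shapes below are assumptions, and a real analysis would add spatial normalization, smoothing, and multiple-comparison correction.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)
n_voxels = 10_000                               # hypothetical voxel count
gm_me = rng.normal(0.55, 0.05, (15, n_voxels))  # music expert group (assumed n=15)
gm_mh = rng.normal(0.53, 0.05, (15, n_voxels))  # music hobby group
gm_nm = rng.normal(0.52, 0.05, (15, n_voxels))  # nonmusic group

# One-way ANOVA at every voxel (subjects stacked along axis 0)
f_vals, p_vals = f_oneway(gm_me, gm_mh, gm_nm, axis=0)
print("voxels with p < .001 (uncorrected):", int(np.sum(p_vals < 0.001)))
```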


2016 ◽  
Vol 45 (5) ◽  
pp. 752-760 ◽  
Author(s):  
Paulo E. Andrade ◽  
Patrícia Vanzella ◽  
Olga V. C. A. Andrade ◽  
E. Glenn Schellenberg

Brazilian listeners (N = 303) were asked to identify emotions conveyed in 1-min instrumental excerpts from Wagner’s operas. Participants included musically untrained 7- to 10-year-olds and university students in music (musicians) or science (nonmusicians). After hearing each of eight different excerpts, listeners made a forced-choice judgment about which of eight emotions best matched the excerpt. The excerpts and emotions were chosen so that two fell in each of the four quadrants of a two-dimensional space defined by arousal and valence. Listeners of all ages performed at above-chance levels, which means that complex, unfamiliar musical materials from a different century and culture are nevertheless meaningful for young children. In fact, children performed similarly to adult nonmusicians. There was age-related improvement among children, however, and adult musicians performed best of all. As in previous research that used simpler musical excerpts, effects of age and music training were due primarily to improvements in selecting the appropriate valence. That is, even 10-year-olds with no music training were as likely as adult musicians to match a high- or low-arousal excerpt with a high- or low-arousal emotion, respectively. Performance was independent of general cognitive ability as measured by academic achievement but correlated positively with basic pitch-perception skills.
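To make "above chance" concrete: with eight excerpts and eight response options, chance accuracy is 1/8. The sketch below only illustrates that arithmetic with a binomial test; the listener counts are invented and this is not the authors' analysis.

```python
from scipy.stats import binomtest

n_excerpts, n_options = 8, 8
chance = 1 / n_options                          # 0.125 for 8-alternative forced choice

# Hypothetical listener who labels 4 of the 8 excerpts correctly
result = binomtest(k=4, n=n_excerpts, p=chance, alternative="greater")
print(f"chance = {chance:.3f}, p = {result.pvalue:.4f}")
```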


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Pragati Patel ◽  
Raghunandan R ◽  
Ramesh Naidu Annavarapu

Many studies on brain–computer interfaces (BCIs) have sought to understand the emotional state of the user in order to provide a reliable link between humans and machines. Advanced neuroimaging methods like electroencephalography (EEG) have enabled researchers to capture and understand a wide range of human emotions more precisely. This physiological, EEG-based approach stands in contrast to traditional methods based on non-physiological signals and has been shown to perform better. Because EEG closely measures the electrical activity of the brain, a nonlinear system, entropy proves to be an efficient feature for extracting meaningful information from raw brain waves. This review gives a brief summary of the various entropy-based methods used for emotion classification, providing insight into EEG-based emotion recognition. It also reviews current and future trends and discusses how using entropy to extract features can enhance emotion identification from EEG signals.
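As a rough illustration of the kind of entropy feature this literature uses (not code from any reviewed study), the sketch below band-filters each EEG channel and computes the Shannon entropy of its amplitude histogram; the sampling rate, band edges, and bin count are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_filter(x, fs, low, high, order=4):
    """Band-pass filter one EEG channel (e.g., alpha: 8-13 Hz)."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def shannon_entropy(x, bins=32):
    """Shannon entropy (bits) of the amplitude histogram of a signal segment."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def entropy_features(epoch, fs=256):
    """Band-wise entropy per channel; `epoch` has shape (channels, samples)."""
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    return np.array([shannon_entropy(band_filter(ch, fs, low, high))
                     for ch in epoch for (low, high) in bands.values()])

# Toy example: a 32-channel, 2-second epoch of synthetic "EEG"
rng = np.random.default_rng(0)
epoch = rng.standard_normal((32, 2 * 256))
print(entropy_features(epoch).shape)   # 32 channels * 3 bands -> (96,)
```

Feature vectors of this kind would then be fed to a classifier; the choice of entropy measure (Shannon, sample, differential, etc.) is exactly what the reviewed methods vary.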


2021 ◽  
Author(s):  
Ashley E Symons ◽  
Adam Tierney

Speech perception requires the integration of evidence from acoustic cues across multiple dimensions. Individuals differ in their cue weighting strategies, i.e., the weight they assign to different acoustic dimensions during speech categorization. In two experiments, we investigate musical training as one potential predictor of individual differences in prosodic cue weighting strategies. Attentional theories of speech categorization suggest that prior experience with the task-relevance of a particular acoustic dimension leads that dimension to attract attention. Experiment 1 therefore tested whether musicians and non-musicians differed in their ability to selectively attend to pitch and loudness in speech. Compared to non-musicians, musicians showed enhanced dimension-selective attention to pitch but not loudness. In Experiment 2, we tested the hypothesis that musicians would show greater pitch weighting during prosodic categorization due to prior experience with the task-relevance of pitch cues in music. In this experiment, listeners categorized phrases that varied in the extent to which pitch and duration signaled the location of linguistic focus and phrase boundaries. During linguistic focus categorization only, musicians up-weighted pitch compared to non-musicians. These results suggest that musical training is linked with a domain-general enhancement of the salience of pitch cues, and that this increased pitch salience may lead to an up-weighting of pitch during some prosodic categorization tasks. These findings also support attentional theories of cue weighting, in which more salient acoustic dimensions are given more importance during speech categorization.
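For concreteness, cue weights of this kind are often estimated by regressing listeners' binary category responses on the stimulus cue values and comparing the resulting coefficients. The sketch below is a hedged illustration with simulated data, not the authors' analysis; the cue values and the weights of the simulated listener are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials = 400

# Stimulus continuum: standardized pitch and duration cue values per trial
pitch = rng.uniform(-1, 1, n_trials)
duration = rng.uniform(-1, 1, n_trials)

# Simulated listener who weights pitch more heavily than duration
logit = 2.5 * pitch + 0.8 * duration
responses = rng.random(n_trials) < 1 / (1 + np.exp(-logit))

X = np.column_stack([pitch, duration])
model = LogisticRegression().fit(X, responses)
w_pitch, w_duration = model.coef_[0]

# Normalized cue weights: the relative importance of each dimension
total = abs(w_pitch) + abs(w_duration)
print("pitch weight:", w_pitch / total)
print("duration weight:", w_duration / total)
```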


Author(s):  
Eric D. Young ◽  
Donata Oertel

Neuronal circuits in the brainstem convert the output of the ear, which carries the acoustic properties of ongoing sound, to a representation of the acoustic environment that can be used by the thalamocortical system. Most important, brainstem circuits reflect the way the brain uses acoustic cues to determine where sounds arise and what they mean. The circuits merge the separate representations of sound in the two ears and stabilize them in the face of disturbances such as loudness fluctuation or background noise. Embedded in these systems are some specialized analyses that are driven by the need to resolve tiny differences in the time and intensity of sounds at the two ears and to resolve rapid temporal fluctuations in sounds like the sequence of notes in music or the sequence of syllables in speech.


2020 ◽  
Vol 6 (30) ◽  
pp. eaba7830
Author(s):  
Laurianne Cabrera ◽  
Judit Gervain

Speech perception is constrained by auditory processing. Although at birth infants have an immature auditory system and limited language experience, they show remarkable speech perception skills. To assess neonates’ ability to process the complex acoustic cues of speech, we combined near-infrared spectroscopy (NIRS) and electroencephalography (EEG) to measure brain responses to syllables differing in consonants. The syllables were presented in three conditions preserving (i) original temporal modulations of speech [both amplitude modulation (AM) and frequency modulation (FM)], (ii) both fast and slow AM, but not FM, or (iii) only the slowest AM (<8 Hz). EEG responses indicate that neonates can encode consonants in all conditions, even without the fast temporal modulations, similarly to adults. Yet, the fast and slow AM activate different neural areas, as shown by NIRS. Thus, the immature human brain is already able to decompose the acoustic components of speech, laying the foundations of language learning.
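A minimal sketch of the signal processing behind condition (iii), keeping only amplitude modulations below 8 Hz, is shown below. The actual stimuli were created with a more elaborate multi-band vocoder that also controls FM; this single-band version, with an assumed sampling rate and filter settings, only illustrates the envelope extraction.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def slow_am_envelope(signal, fs, cutoff_hz=8.0, order=4):
    """Extract the amplitude envelope and low-pass it below `cutoff_hz`."""
    envelope = np.abs(hilbert(signal))                      # Hilbert envelope (AM)
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, envelope)                       # keep only slow AM

# Toy example: a 200 Hz tone with a 4 Hz amplitude modulation, re-imposed on noise
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
syllable_like = np.sin(2 * np.pi * 200 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
env = slow_am_envelope(syllable_like, fs)
carrier = np.random.default_rng(2).standard_normal(len(t))
degraded = env * carrier    # preserves slow AM, discards FM and fast AM
```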


2019 ◽  
Vol 14 (1) ◽  
pp. 17-33 ◽  
Author(s):  
Eun Cho

This study addresses the issue of sensitive periods – a developmental window when experience or stimulation has unusually strong and long-lasting impacts on certain areas of brain development and thus behaviour (Bailey and Penhune 2012) – for music training from a neurological perspective. Are there really sensitive periods in which early musical training has greater effects on the brain and behaviour than training later in life? Many neuroscience studies support the idea that beginning music training before the age of 7 is advantageous in many developmental respects, based on findings that early onset of music training is closely associated with enhanced structural and functional plasticity in visual-, auditory-, somatosensory- and motor-related regions of the brain. Although these studies help early childhood music educators expand their understanding of the potential benefits of early music training, they often mislead us into believing that early onset is simply better. Careful consideration of the details of these studies is needed when applying their findings to practice. In this regard, this study reviews neuroscience research on sensitive periods for childhood music training and discusses how early childhood music educators could properly apply these findings in their music teaching practice.


2011 ◽  
Vol 12 (1) ◽  
pp. 77-77
Author(s):  
Sharpley Hsieh ◽  
Olivier Piguet ◽  
John R. Hodges

Introduction: Frontotemporal dementia (FTD) is a progressive neurodegenerative brain disease characterised clinically by abnormalities in behaviour, cognition and language. Two subgroups, behavioural-variant FTD (bvFTD) and semantic dementia (SD), also show impaired emotion recognition, particularly for negative emotions. This deficit has been demonstrated using visual stimuli such as facial expressions. Whether recognition of emotions conveyed through other modalities, for example music, is also impaired has not been investigated. Methods: Patients with bvFTD, SD and Alzheimer's disease (AD), as well as healthy age-matched controls, labelled tunes according to the emotion conveyed (happy, sad, peaceful or scary). In addition, each tune was rated along two orthogonal emotional dimensions: valence (pleasant/unpleasant) and arousal (stimulating/relaxing). Participants also undertook a facial emotion recognition test and other cognitive tests. Integrity of basic music detection (tone, tempo) was also examined. Results: Patient groups were matched for disease severity. Overall, patients did not differ from controls in basic music processing or in the recognition of facial expressions. Ratings of valence and arousal were similar across groups. In contrast, SD patients were selectively impaired at recognising music conveying negative emotions (sad and scary). Patients with bvFTD did not differ from controls. Conclusion: Recognition of emotions in music appears to be selectively affected in some FTD subgroups more than others, a disturbance of emotion detection that appears to be modality specific. This finding suggests a dissociation in the neural networks necessary for processing emotions, depending on modality.

