The impact of music on auditory and speech processing

Author(s):  
Abdollah Moossavi ◽  
Nasrin Gohari

Background and Aim: Researchers in the fields of psychoacoustics and electrophysiology have mostly focused on demonstrating the distinct, and often superior, neurophysiological performance of musicians. The present study explores the impact of music on the auditory system and on non-auditory systems, as well as the improvement of language and cognitive skills following listening to music or receiving music training. Recent Findings: Studies indicate an impact of music on auditory processing from the cochlea to the secondary auditory cortex and other parts of the brain. An impact of music on speech perception and other cognitive processing has also been demonstrated. Some papers point to bottom-up and others to top-down processing, which is explained in detail. Conclusion: Listening to music and receiving music training, in the long run, create plasticity from the cochlea to the auditory cortex. Since the auditory pathway for musical sounds overlaps functionally with the speech pathway, music also supports better speech perception. Both perceptual and cognitive functions are involved in this process. Music engages a large area of the brain, so it can be used as a supplement in rehabilitation programs and can help improve speech and language skills.

2019 ◽  
Author(s):  
Jérémy Giroud ◽  
Agnès Trébuchon ◽  
Daniele Schön ◽  
Patrick Marquis ◽  
Catherine Liegeois-Chauvel ◽  
...  

Abstract: Speech perception is mediated by both left and right auditory cortices, but with differential sensitivity to specific acoustic information contained in the speech signal. A detailed description of this functional asymmetry is missing, and the underlying models are widely debated. We analyzed cortical responses from 96 epilepsy patients with electrode implantation in left or right primary, secondary, and/or association auditory cortex. We presented short acoustic transients to reveal the stereotyped spectro-spatial oscillatory response profile of the auditory cortical hierarchy. We show remarkably similar bimodal spectral response profiles in left and right primary and secondary regions, with preferred processing modes in the theta (∼4-8 Hz) and low gamma (∼25-50 Hz) ranges. These results highlight that the human auditory system employs a two-timescale processing mode. Beyond these first cortical levels of auditory processing, a hemispheric asymmetry emerged, with delta and beta band (∼3/15 Hz) responsivity prevailing in the right hemisphere and theta and gamma band (∼6/40 Hz) activity in the left. These intracranial data provide a more fine-grained and nuanced characterization of cortical auditory processing in the two hemispheres, shedding light on the neural dynamics that potentially shape auditory and speech processing at different levels of the cortical hierarchy.
Author summary: Speech processing is now known to be distributed across the two hemispheres, but the origin and function of lateralization continue to be vigorously debated. The asymmetric sampling in time (AST) hypothesis predicts that (1) the auditory system employs a two-timescale processing mode, (2) this mode is present in both hemispheres but with a different ratio of fast and slow timescales, and (3) the asymmetry emerges outside of primary cortical regions. Capitalizing on intracranial data from 96 epilepsy patients, we validated each of these predictions and provide a precise estimate of the processing timescales. In particular, we reveal that asymmetric sampling in associative areas is underpinned by distinct two-timescale processing modes. Overall, our results shed light on the neurofunctional architecture of cortical auditory processing.
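
The spectral profiling described here boils down to estimating band-limited response power per recording site. Below is a minimal sketch of that computation in Python, assuming a 1 kHz sampling rate and the band edges named in the abstract; it is our illustration, not the authors' analysis pipeline.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

FS = 1000  # assumed sampling rate (Hz), not taken from the paper
BANDS = {"delta": (1, 3), "theta": (4, 8),
         "beta": (13, 30), "low_gamma": (25, 50)}

def band_power(signal, band, fs=FS):
    """Integrate the Welch power spectral density over one frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)  # 0.5 Hz resolution
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return trapezoid(psd[mask], freqs[mask])

# Synthetic example: a channel dominated by ~6 Hz (theta) activity
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
chan = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)
profile = {name: band_power(chan, b) for name, b in BANDS.items()}
print(max(profile, key=profile.get))  # -> "theta"
```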


Author(s):  
Behieh Kohansal ◽  
Mehdi Asghari ◽  
Sirvan Najafi ◽  
Fahimeh Hamedi

Background and Aim: Tinnitus is one of the most difficult challenges in audiology and otology. Previous studies have shown that tinnitus may interfere with the function of the central auditory system (CAS). Impairment of CAS abilities, including speech perception and auditory processing, causes serious problems for people with tinnitus. Given the lack of information about the impact of tinnitus on the CAS and its function, and given that there is no standardized protocol for the assessment and management of tinnitus, this study aimed to review the studies on the effect of tinnitus on CAS function. Recent Findings: Sixteen eligible articles were reviewed. Deficits in temporal and spectral resolution, frequency discrimination, and speech perception were reported in patients with tinnitus, especially in background noise. This was reported even in tinnitus patients with normal hearing. Conclusion: Assessment of central auditory processing and speech perception in noise seems to be useful for the proper management of tinnitus in clinical practice. Keywords: Tinnitus; auditory system; central auditory processing; speech in noise performance


2019 ◽  
Author(s):  
Yamil Vidal ◽  
Perrine Brusini ◽  
Michela Bonfieni ◽  
Jacques Mehler ◽  
Tristan Bekinschtein

Abstract: As the evidence of predictive processes playing a role in a wide variety of cognitive domains increases, the brain as a predictive machine becomes a central idea in neuroscience. In auditory processing, considerable progress has been made using variations of the oddball design, but most of the existing work is restricted to predictions based on physical features or conditional rules linking successive stimuli. To characterize the brain's capacity to predict abstract rules, we present here two experiments that use speech-like stimuli to overcome these limitations and avoid common confounds. Pseudowords were presented in isolation, intermixed with infrequent deviants that contained unexpected phoneme sequences. As hypothesized, the occurrence of unexpected sequences of phonemes reliably elicited an early prediction error signal. These prediction error signals did not seem to be modulated by attentional manipulations due to different task instructions, suggesting that the predictions are deployed even when the task at hand does not volitionally involve error detection. In contrast, the number of syllables congruent with a standard pseudoword presented before the point of deviance exerted a strong modulation. The amplitude of the prediction error doubled when two congruent syllables were presented instead of one, despite local transitional probabilities being kept constant. This suggests that auditory predictions can be built by integrating information beyond the immediate past. In sum, the results presented here further contribute to the understanding of the predictive capabilities of the human auditory system when facing complex stimuli and abstract rules.
Significance Statement: The generation of predictions seems to be a prevalent brain computation. In the case of auditory processing, this information is intrinsically temporal. The study of auditory predictions has been largely circumscribed to unexpected physical stimulus features or rules connecting consecutive stimuli. In contrast, our everyday experience suggests that the human auditory system is capable of more sophisticated predictions. This becomes evident in the case of speech processing, where abstract rules with long-range dependencies are universal. In this article, we present two electroencephalography experiments that use speech-like stimuli to explore the predictive capabilities of the human auditory system. The results presented here increase the understanding of the ability of our auditory system to implement predictions using information beyond the immediate past.
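
To make the design concrete, here is a hypothetical sketch of such a pseudoword oddball stream: a frequent standard and rare deviants that diverge after either one or two syllables congruent with the standard. The pseudowords and probabilities are illustrative placeholders, not the authors' stimulus set.

```python
import random

STANDARD = ("ku", "ti", "pa")        # assumed standard pseudoword
DEVIANT_EARLY = ("ku", "da", "pa")   # diverges after 1 congruent syllable
DEVIANT_LATE = ("ku", "ti", "ga")    # diverges after 2 congruent syllables

def make_stream(n_trials=500, p_deviant=0.1, seed=0):
    """Mostly standards, with rare deviants of both divergence points."""
    rng = random.Random(seed)
    return [rng.choice([DEVIANT_EARLY, DEVIANT_LATE])
            if rng.random() < p_deviant else STANDARD
            for _ in range(n_trials)]

trials = make_stream()
print(sum(t != STANDARD for t in trials), "deviants out of", len(trials))
```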


Author(s):  
Josef P. Rauschecker

When one talks about hearing, some may first imagine the auricle (or external ear), which is the only visible part of the auditory system in humans and other mammals. Its shape and size vary among people, but it does not tell us much about a person's ability to hear (except perhaps their ability to localize sounds in space, where the shape of the auricle plays a certain role). Most of what is used for hearing is inside the head, particularly in the brain. The inner ear transforms mechanical vibrations into electrical signals; the auditory nerve then sends these signals into the brainstem, where intricate preprocessing occurs. Although auditory brainstem mechanisms are an important part of central auditory processing, it is the processing taking place in the cerebral cortex (with the thalamus as mediator) that enables auditory perception and cognition. Human speech and the appreciation of music can hardly be imagined without a complex cortical network of specialized regions, each contributing different aspects of auditory cognitive abilities. During the evolution of these abilities in higher vertebrates, especially birds and mammals, the cortex played a crucial role, so a great deal of what is referred to as central auditory processing happens there. Whether it is the recognition of one's mother's voice, listening to Pavarotti singing or Yo-Yo Ma playing the cello, or hearing or reading Shakespeare's sonnets, each evokes electrical vibrations in the auditory cortex, but it does not end there. Large parts of frontal and parietal cortex receive auditory signals originating in auditory cortex, forming processing streams for auditory object recognition and auditory-motor control, before the signals are channeled into other parts of the brain for comprehension and enjoyment.


2020 ◽  
Vol 6 (30) ◽  
pp. eaba7830
Author(s):  
Laurianne Cabrera ◽  
Judit Gervain

Speech perception is constrained by auditory processing. Although at birth infants have an immature auditory system and limited language experience, they show remarkable speech perception skills. To assess neonates’ ability to process the complex acoustic cues of speech, we combined near-infrared spectroscopy (NIRS) and electroencephalography (EEG) to measure brain responses to syllables differing in consonants. The syllables were presented in three conditions preserving (i) original temporal modulations of speech [both amplitude modulation (AM) and frequency modulation (FM)], (ii) both fast and slow AM, but not FM, or (iii) only the slowest AM (<8 Hz). EEG responses indicate that neonates can encode consonants in all conditions, even without the fast temporal modulations, similarly to adults. Yet, the fast and slow AM activate different neural areas, as shown by NIRS. Thus, the immature human brain is already able to decompose the acoustic components of speech, laying the foundations of language learning.
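
As a rough illustration of the temporal-envelope manipulations these conditions imply, the sketch below extracts the amplitude modulation (AM) of a signal with the Hilbert transform and low-passes it at 8 Hz to keep only the slowest AM. The sampling rate and filter settings are our assumptions, not the vocoder parameters used in the study.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

FS = 16000  # assumed audio sampling rate (Hz)

def slow_am_envelope(band_signal, cutoff_hz=8.0, fs=FS):
    """Hilbert envelope of one analysis band, low-passed below cutoff_hz."""
    envelope = np.abs(hilbert(band_signal))            # full AM
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, envelope)                    # keeps only slow AM

# Example: a 500 Hz carrier with a slow (4 Hz) and a fast (30 Hz) modulation
t = np.arange(0, 1.0, 1 / FS)
x = ((1 + 0.5 * np.sin(2 * np.pi * 4 * t) + 0.5 * np.sin(2 * np.pi * 30 * t))
     * np.sin(2 * np.pi * 500 * t))
env = slow_am_envelope(x)  # retains the 4 Hz modulation, removes the 30 Hz
```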


2014 ◽  
Vol 369 (1651) ◽  
pp. 20130297 ◽  
Author(s):  
Jeremy I. Skipper

What do we hear when someone speaks and what does auditory cortex (AC) do with that sound? Given how meaningful speech is, it might be hypothesized that AC is most active when other people talk so that their productions get decoded. Here, neuroimaging meta-analyses show the opposite: AC is least active and sometimes deactivated when participants listened to meaningful speech compared to less meaningful sounds. Results are explained by an active hypothesis-and-test mechanism where speech production (SP) regions are neurally re-used to predict auditory objects associated with available context. By this model, more AC activity for less meaningful sounds occurs because predictions are less successful from context, requiring further hypotheses be tested. This also explains the large overlap of AC co-activity for less meaningful sounds with meta-analyses of SP. An experiment showed a similar pattern of results for non-verbal context. Specifically, words produced less activity in AC and SP regions when preceded by co-speech gestures that visually described those words compared to those words without gestures. Results collectively suggest that what we ‘hear’ during real-world speech perception may come more from the brain than our ears and that the function of AC is to confirm or deny internal predictions about the identity of sounds.
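
The hypothesis-and-test idea can be caricatured in a few lines: activity tracks prediction error, so input that context predicts well leaves little left to explain. The toy below is our own illustration of that logic, not Skipper's model.

```python
def prediction_error(heard, predicted):
    """Count of mismatches between the heard input and the contextual guess."""
    return sum(h != p for h, p in zip(heard, predicted))

context_prediction = ["k", "a", "t"]   # strong context: "cat" is expected
meaningful_speech = ["k", "a", "t"]    # matches the prediction
less_meaningful = ["k", "o", "f"]      # mismatches the prediction

print(prediction_error(meaningful_speech, context_prediction))  # 0 -> little AC activity
print(prediction_error(less_meaningful, context_prediction))    # 2 -> more testing needed
```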


2021 ◽  
Author(s):  
Luis M. Rivera-Perez ◽  
Julia T. Kwapiszewski ◽  
Michael T. Roberts

Abstract: The inferior colliculus (IC), the midbrain hub of the central auditory system, receives extensive cholinergic input from the pontomesencephalic tegmentum. Activation of nicotinic acetylcholine receptors (nAChRs) in the IC can alter acoustic processing and enhance auditory task performance. However, how nAChRs affect the excitability of specific classes of IC neurons remains unknown. Recently, we identified vasoactive intestinal peptide (VIP) neurons as a distinct class of glutamatergic principal neurons in the IC. Here, in experiments using male and female mice, we show that cholinergic terminals are routinely located adjacent to the somas and dendrites of VIP neurons. Using whole-cell electrophysiology in brain slices, we found that acetylcholine drives surprisingly strong and long-lasting excitation and inward currents in VIP neurons. This excitation was unaffected by the muscarinic receptor antagonist atropine. Application of nAChR antagonists revealed that acetylcholine excites VIP neurons mainly via activation of α3β4* nAChRs, a nAChR subtype that is rare in the brain. Furthermore, we show that cholinergic excitation is intrinsic to VIP neurons and does not require activation of presynaptic inputs. Lastly, we found that low-frequency trains of acetylcholine puffs elicited temporal summation in VIP neurons, suggesting that in vivo-like patterns of cholinergic input can reshape activity for prolonged periods. These results reveal the first cellular mechanisms of nAChR regulation in the IC, identify a functional role for α3β4* nAChRs in the auditory system, and suggest that cholinergic input can potently influence auditory processing by increasing excitability in VIP neurons and their postsynaptic targets.
Key points summary:
- The inferior colliculus (IC), the midbrain hub of the central auditory system, receives extensive cholinergic input and expresses a variety of nicotinic acetylcholine receptor (nAChR) subunits.
- In vivo activation of nAChRs alters the input-output functions of IC neurons and influences performance in auditory tasks. However, how nAChR activation affects the excitability of specific IC neuron classes remains unknown.
- Here we show in mice that cholinergic terminals are located adjacent to the somas and dendrites of VIP neurons, a class of IC principal neurons.
- We find that acetylcholine elicits surprisingly strong, long-lasting excitation of VIP neurons, mediated mainly through activation of α3β4* nAChRs, a subtype that is rare in the brain.
- Our data identify a role for α3β4* nAChRs in the central auditory pathway and reveal a mechanism by which cholinergic input can influence auditory processing in the IC and the postsynaptic targets of VIP neurons.
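
Temporal summation here just means that each cholinergic response decays slowly enough that the next puff in a low-frequency train arrives before the previous response has faded. The simulation below illustrates that arithmetic with an assumed single-exponential decay; the time constant and amplitude are placeholders, not measured values.

```python
import numpy as np

DT = 0.001   # time step (s)
TAU = 1.0    # assumed slow decay constant of the response (s)
AMP = 1.0    # arbitrary response amplitude per puff

def summed_response(puff_times, t_end=12.0):
    """Sum of one decaying exponential response per acetylcholine puff."""
    t = np.arange(0.0, t_end, DT)
    r = np.zeros_like(t)
    for t0 in puff_times:
        r += AMP * np.exp(-(t - t0) / TAU) * (t >= t0)
    return t, r

# Five puffs at 0.5 Hz: each arrives before the last response has decayed,
# so the peak depolarization grows across the train (temporal summation).
t, r = summed_response([0.0, 2.0, 4.0, 6.0, 8.0])
print(round(r.max(), 2))  # clearly exceeds the single-puff amplitude of 1.0
```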


2020 ◽  
Author(s):  
Tulio Guadalupe ◽  
Xiang-Zhen Kong ◽  
Sophie E. A. Akkermans ◽  
Simon E. Fisher ◽  
Clyde Francks

Abstract: Most people have a right-ear advantage for the perception of spoken syllables, consistent with left-hemisphere dominance for speech processing. However, there is considerable variation, with some people showing a left-ear advantage. The extent to which this variation is reflected in brain structure remains unclear. We tested for relations between hemispheric asymmetries of auditory processing and of grey matter in 281 adults, using dichotic listening and voxel-based morphometry. This was the largest study of this issue to date. Per-voxel asymmetry indexes were derived for each participant following registration of brain magnetic resonance images to a symmetrized template. The asymmetry index derived from dichotic listening was related to grey matter asymmetry in clusters of voxels corresponding to the amygdala and cerebellum lobule VI. There was also a smaller, non-significant cluster in the posterior superior temporal gyrus, a region of auditory cortex. These findings contribute to the mapping of asymmetrical structure-function links in the human brain, and suggest that subcortical structures should be investigated in relation to hemispheric dominance for speech processing, in addition to auditory cortex.
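
Both the behavioral and the structural measures rest on an asymmetry (laterality) index. Here is a minimal sketch assuming the common (L - R)/(L + R) form; the study's exact definitions and sign conventions may differ.

```python
def asymmetry_index(a, b):
    """Normalized difference: positive when a > b, bounded in [-1, 1]."""
    return (a - b) / (a + b)

# Dichotic listening: an index > 0 here marks the typical right-ear
# advantage, which reflects left-hemisphere dominance for speech.
right_ear_correct, left_ear_correct = 34, 22
ear_index = asymmetry_index(right_ear_correct, left_ear_correct)

# Per-voxel grey matter asymmetry after registration to a symmetric template
gm_left, gm_right = 0.61, 0.58
voxel_index = asymmetry_index(gm_left, gm_right)  # > 0: leftward asymmetry

print(round(ear_index, 3), round(voxel_index, 3))
```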


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Magdalena Solyga ◽  
Tania Rinaldi Barkat

Offset responses in auditory processing appear after a sound terminates. They arise in neuronal circuits within the peripheral auditory system, but their role in the central auditory system remains unknown. Here, we ask what the behavioral relevance of cortical offset responses is and what circuit mechanisms drive them. At the perceptual level, our results reveal that experimentally minimizing auditory cortical offset responses decreases the ability of mice to detect sound termination, assigning a behavioral role to offset responses. By combining in vivo electrophysiology in the auditory cortex and thalamus of awake mice, we also demonstrate that cortical offset responses are not only inherited from the periphery but also amplified and generated de novo. Finally, we show that offset responses code more than silence, including relevant changes in sound trajectories. Together, our results reveal the importance of cortical offset responses in encoding sound termination and in detecting changes within temporally discontinuous sounds, which is crucial for speech and vocalization.
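
A common way to quantify an offset response from recorded spike times is to compare firing in a brief window after sound termination with a pre-stimulus baseline. The sketch below shows that computation with illustrative window lengths; it is not the paper's analysis pipeline.

```python
import numpy as np

def offset_response(spike_times, sound_off, win=0.05, baseline=(-0.5, -0.3)):
    """Spikes/s in the 50 ms after sound offset, minus a baseline rate."""
    spikes = np.asarray(spike_times)
    off_rate = np.sum((spikes >= sound_off) & (spikes < sound_off + win)) / win
    b0, b1 = sound_off + baseline[0], sound_off + baseline[1]
    base_rate = np.sum((spikes >= b0) & (spikes < b1)) / (b1 - b0)
    return off_rate - base_rate

# Example: a burst of spikes right after a sound that ends at t = 1.0 s
spikes = [0.55, 0.60, 1.005, 1.012, 1.020, 1.031, 1.045]
print(offset_response(spikes, sound_off=1.0))  # well above baseline
```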


2007 ◽  
Vol 363 (1493) ◽  
pp. 1023-1035 ◽  
Author(s):  
Roy D Patterson ◽  
Ingrid S Johnsrude

In this paper, we describe domain-general auditory processes that we believe are prerequisite to the linguistic analysis of speech. We discuss biological evidence for these processes and how they might relate to processes that are specific to human speech and language. We begin with a brief review of (i) the anatomy of the auditory system and (ii) the essential properties of speech sounds. Section 4 describes the general auditory mechanisms that we believe are applied to all communication sounds, and how functional neuroimaging is being used to map the brain networks associated with domain-general auditory processing. Section 5 discusses recent neuroimaging studies that explore where such general processes give way to those that are specific to human speech and language.

