Pitch processing in the human brain is influenced by language experience

Neuroreport, 1998, Vol 9 (9), pp. 2115-2119. Author(s): Jack Gandour, Donald Wong, Gary Hutchins
2014, Vol 307, pp. 53-64. Author(s): Christopher J. Plack, Daphne Barker, Deborah A. Hall
2020, Vol 6 (30), pp. eaba7830. Author(s): Laurianne Cabrera, Judit Gervain

Speech perception is constrained by auditory processing. Although at birth infants have an immature auditory system and limited language experience, they show remarkable speech perception skills. To assess neonates’ ability to process the complex acoustic cues of speech, we combined near-infrared spectroscopy (NIRS) and electroencephalography (EEG) to measure brain responses to syllables differing in consonants. The syllables were presented in three conditions preserving (i) original temporal modulations of speech [both amplitude modulation (AM) and frequency modulation (FM)], (ii) both fast and slow AM, but not FM, or (iii) only the slowest AM (<8 Hz). EEG responses indicate that neonates can encode consonants in all conditions, even without the fast temporal modulations, similarly to adults. Yet, the fast and slow AM activate different neural areas, as shown by NIRS. Thus, the immature human brain is already able to decompose the acoustic components of speech, laying the foundations of language learning.


2010, Vol 23 (1), pp. 81-95. Author(s): Ananthanarayan Krishnan, Jackson T. Gandour, Gavin M. Bidelman

2020. Author(s): Stephen McCullough, Karen Emmorey

We investigated, using voxel-based morphometry (VBM), how deafness and sign language experience affect the anatomical structures of the human brain by comparing gray matter (GM) and white matter (WM) structures across congenitally deaf native signers, hearing native signers, and hearing sign-naïve controls (n = 90). We also compared the same groups on cortical thickness, surface area, and local gyrification using surface-based morphometry (SBM). Both VBM and SBM results revealed deafness-related changes in visual cortices and right frontal lobe. The GM in the auditory cortices did not appear to be affected by deafness; however, there was a significant WM reduction in left Heschl's gyrus for deaf signers only. The SBM comparisons revealed changes associated with lifelong signing experience: expansions in the surface area within left anterior temporal and left occipital lobes, and a reduction in cortical thickness in the right occipital lobe for deaf and hearing signers. Structural changes within these brain regions may be related to adaptations in the neural networks involved in processing signed language (i.e., visual perception of face and body movements). Hearing native signers also had unique neuroanatomical changes (e.g., reduced gyrification in premotor areas), perhaps due to lifelong experience with both a spoken and a signed language.


2021, Vol 12. Author(s): Malathi Thothathiri

Whether sentences are formulated primarily using lexically based or non-lexically based information has been much debated. In this perspective article, I review evidence for rational flexibility in the sentence production architecture. Sentences can be constructed flexibly via lexically dependent or independent routes, and rationally depending on the statistical properties of the input and the validity of lexical vs. abstract cues for predicting sentence structure. Different neural pathways appear to be recruited for individuals with different executive function abilities and for verbs with different statistical properties, suggesting that alternative routes are available for producing the same structure. Together, extant evidence indicates that the human brain adapts to ongoing language experience during adulthood, and that the nature of the adjustment may depend rationally on the statistical contingencies of the current context.


2019, Vol 224 (5), pp. 1723-1738. Author(s): Simon Leipold, Christian Brauchli, Marielle Greber, Lutz Jäncke

2006, Vol 27 (2), pp. 173-183. Author(s): Yisheng Xu, Jackson Gandour, Thomas Talavage, Donald Wong, Mario Dzemidzic, ...

2016, Vol 28 (12), pp. 2044-2058. Author(s): Stefanie Hutka, Sarah M. Carpentier, Gavin M. Bidelman, Sylvain Moreno, Anthony R. McIntosh

Musicianship has been associated with auditory processing benefits. It is unclear, however, whether pitch processing experience in nonmusical contexts, namely, speaking a tone language, has comparable associations with auditory processing. Studies comparing the auditory processing of musicians and tone language speakers have shown varying degrees of between-group similarity with regard to perceptual processing benefits and, particularly, nonlinguistic pitch processing. To test whether the auditory abilities honed by musicianship or speaking a tone language differentially impact the neural networks supporting nonlinguistic pitch processing (relative to timbral processing), we employed a novel application of brain signal variability (BSV) analysis. BSV is a metric of information processing capacity and holds great potential for understanding the neural underpinnings of experience-dependent plasticity. Here, we measured BSV in electroencephalograms of musicians, tone language-speaking nonmusicians, and English-speaking nonmusicians (controls) during passive listening to music and speech sound contrasts. Although musicians showed greater BSV across the board, each group showed a unique spatiotemporal distribution in neural network engagement: Controls had greater BSV for speech than music; tone language-speaking nonmusicians showed the opposite effect; musicians showed similar BSV for both domains. Collectively, results suggest that musical and tone language pitch experience differentially affect auditory processing capacity within the cerebral cortex. However, information processing capacity is graded: More experience with pitch is associated with greater BSV when processing this cue. Higher BSV in musicians may suggest increased information integration within the brain networks subserving speech and music, which may be related to their well-documented advantages on a wide variety of speech-related tasks.

