Neural and Behavioral Evidence for Frequency-Selective Context Effects in Rhythm Processing in Humans

2020 ◽  
Vol 1 (1) ◽  
Author(s):  
Tomas Lenc ◽  
Peter E Keller ◽  
Manuel Varlet ◽  
Sylvie Nozaradan

Abstract
When listening to music, people often perceive and move along with a periodic meter. However, the dynamics of mapping between meter perception and the acoustic cues to meter periodicities in the sensory input remain largely unknown. To capture these dynamics, we recorded electroencephalography (EEG) while nonmusician and musician participants listened to nonrepeating rhythmic sequences, where acoustic cues to meter frequencies either gradually decreased (from regular to degraded) or increased (from degraded to regular). The results revealed greater neural activity selectively elicited at meter frequencies when the sequence gradually changed from regular to degraded compared with the opposite. Importantly, this effect was unlikely to arise from overall gain or low-level auditory processing, as revealed by physiological modeling. Moreover, the context effect was more pronounced in nonmusicians, who also demonstrated facilitated sensorimotor synchronization with the meter for sequences that started as regular. In contrast, musicians showed weaker effects of recent context in their neural responses and a robust ability to move along with the meter irrespective of stimulus degradation. Together, our results demonstrate that brain activity elicited by rhythm reflects not only passive tracking of stimulus features but also continuous integration of sensory input with recent context.

2021 ◽  
Author(s):  
Karli M Nave ◽  
Erin Hannon ◽  
Joel S. Snyder

Synchronization of movement to music is a seemingly universal human capacity that depends on sustained beat perception. Previous research shows that the frequency of the beat can be observed in the neural activity of the listener. However, the extent to which these neural responses reflect concurrent, conscious perception of musical beat versus stimulus-driven activity is a matter of debate. We investigated whether this kind of periodic brain activity, measured using electroencephalography (EEG), reflects perception of beat, by holding the stimulus constant while manipulating the listener’s perception. Listeners with minimal music training heard a musical excerpt that strongly supported one of two beat patterns (context), followed by a rhythm consistent with either beat pattern (ambiguous phase). During the final phase, listeners indicated whether or not a superimposed drum matched the perceived beat (probe phase). Participants were more likely to indicate that the probe matched the music when that probe matched the original context, suggesting an ability to maintain the beat percept through the ambiguous phase. Likewise, we observed that the spectral amplitude during the ambiguous phase was higher at frequencies corresponding to the beat of the preceding context, and the EEG amplitude at the beat-related frequency predicted performance on the beat induction task on a single-trial basis. Together, these findings provide evidence that auditory cortical activity reflects conscious perception of musical beat and not just stimulus features or effortful attention.
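The frequency-tagging measurement underlying this result (reading out spectral amplitude at the beat frequency from the EEG) can be sketched in a few lines. The sketch below is illustrative only: the sampling rate, trial length, beat frequency, and simulated signal are assumptions, not the study's parameters.

```python
import numpy as np

fs = 250                           # Hz, assumed EEG sampling rate
dur = 32                           # s, assumed length of the analyzed phase
t = np.arange(0, dur, 1 / fs)
beat_f = 1.25                      # Hz, a hypothetical beat frequency

# Simulated "neural" signal: a small beat-frequency component buried in noise
rng = np.random.default_rng(1)
eeg = 0.3 * np.sin(2 * np.pi * beat_f * t) + rng.standard_normal(len(t))

spectrum = np.abs(np.fft.rfft(eeg)) / len(t)   # amplitude spectrum
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def amp_at(f):
    """Amplitude at the FFT bin closest to frequency f (resolution 1/dur Hz)."""
    return spectrum[np.argmin(np.abs(freqs - f))]

# The beat-frequency bin stands out against neighboring (noise-only) bins
print(amp_at(beat_f), amp_at(beat_f + 0.4))
```

A long analysis window matters here: with 32 s of data, the frequency resolution is 1/32 Hz, narrow enough to separate the beat frequency from adjacent noise bins.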


2019 ◽  
Author(s):  
Tomas Lenc ◽  
Peter E. Keller ◽  
Manuel Varlet ◽  
Sylvie Nozaradan

Abstract
When listening to musical rhythm, people tend to spontaneously perceive and move along with a periodic pulse-like meter. Moreover, perception and entrainment to the meter show remarkable stability in the face of dynamically changing rhythmic structure of music, even when acoustic cues to meter frequencies are degraded in the rhythmic input. Here we show that this perceptual phenomenon is supported by a selective synchronization of endogenous brain activity to the perceived meter, and that this neural synchronization is significantly shaped by recent context, especially when the incoming input becomes increasingly ambiguous. We recorded the EEG while non-musician and musician participants listened to nonrepeating rhythmic sequences where acoustic cues to meter frequencies either gradually decreased (from regular to ambiguous) or increased (from ambiguous to regular). We observed that neural activity selectively synchronized to the perceived meter persisted longer when the sequence gradually changed from regular to ambiguous compared to the opposite, thus demonstrating hysteresis in the neural processing of a dynamically changing rhythmic stimulus. This dependence on recent context was weaker in the neural responses of musicians, who also showed greater ability to tap along with a regular meter irrespective of stimulus ambiguity, thus reflecting greater stability relative to the current and recent stimulus in musicians. Together, these asymmetric context effects demonstrate how the relative contribution of incoming and prior signals is continuously weighted to shape neural selection of functionally relevant features and guide perceptual organization of dynamic input.
Significance statement
When listening to musical rhythm, people tend to spontaneously perceive and move along with a periodic pulse-like meter. Moreover, perception and entrainment to the meter seem to show remarkable stability in the face of dynamically changing rhythmic structure of music. Here we show that this is supported by a selective synchronization of brain activity at meter frequencies. This selective neural synchronization persists longer when a nonrepeating sequence gradually transforms from a regular to an ambiguous rhythm compared to the opposite. This asymmetric context effect suggests that the brain processes rhythm based on a flexible combination of sensory and endogenous information. Such continuously updated neural emphasis on meter periodicities might therefore guide robust perceptual organization of a dynamic rhythmic input.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Laura Bechtold ◽  
Christian Bellebaum ◽  
Paul Hoffman ◽  
Marta Ghio

Abstract
This study aimed to replicate and validate concreteness and context effects on semantic word processing. In Experiment 1, we replicated the behavioral findings of Hoffman et al. (Cortex 63, 250–266, https://doi.org/10.1016/j.cortex.2014.09.001, 2015) by applying their cueing paradigm with their original stimuli translated into German. We found concreteness and contextual cues to facilitate word processing in a semantic judgment task with 55 healthy adults. The two factors interacted in their effect on reaction times: abstract word processing profited more strongly from a contextual cue, while the concrete words’ processing advantage was reduced but still present. For accuracy, the descriptive pattern of results suggested an interaction, which was, however, not significant. In Experiment 2, we reformulated the contextual cues to avoid repetition of the to-be-processed word. In 83 healthy adults, the same pattern of results emerged, further validating the findings. Our corroborating evidence supports theories integrating representational richness and semantic control mechanisms as complementary mechanisms in semantic word processing.


2010 ◽  
Vol 21 (7) ◽  
pp. 931-937 ◽  
Author(s):  
C. Nathan DeWall ◽  
Geoff MacDonald ◽  
Gregory D. Webster ◽  
Carrie L. Masten ◽  
Roy F. Baumeister ◽  
...  

Pain, whether caused by physical injury or social rejection, is an inevitable part of life. These two types of pain—physical and social—may rely on some of the same behavioral and neural mechanisms that register pain-related affect. To the extent that these pain processes overlap, acetaminophen, a physical pain suppressant that acts through central (rather than peripheral) neural mechanisms, may also reduce behavioral and neural responses to social rejection. In two experiments, participants took acetaminophen or placebo daily for 3 weeks. Doses of acetaminophen reduced reports of social pain on a daily basis (Experiment 1). We used functional magnetic resonance imaging to measure participants’ brain activity (Experiment 2), and found that acetaminophen reduced neural responses to social rejection in brain regions previously associated with distress caused by social pain and the affective component of physical pain (dorsal anterior cingulate cortex, anterior insula). Thus, acetaminophen reduces behavioral and neural responses associated with the pain of social rejection, demonstrating substantial overlap between social and physical pain.


Author(s):  
Luodi Yu ◽  
Jiajing Zeng ◽  
Suiping Wang ◽  
Yang Zhang

Purpose
This study aimed to examine whether abstract knowledge of word-level linguistic prosody is independent of or integrated with phonetic knowledge.
Method
Event-related potential (ERP) responses were measured from 18 adult listeners while they listened to native and nonnative word-level prosody in speech and in nonspeech. The prosodic phonology (speech) conditions included disyllabic pseudowords spoken in Chinese and in English, matched for syllabic structure, duration, and intensity. The prosodic acoustic (nonspeech) conditions were hummed versions of the speech stimuli, which eliminated the phonetic content while preserving the acoustic prosodic features.
Results
We observed language-specific effects on the ERP: native stimuli elicited larger late negative response (LNR) amplitudes than nonnative stimuli in the prosodic phonology conditions. No such effect was observed in the phoneme-free prosodic acoustic control conditions.
Conclusions
The results support the integration view that word-level linguistic prosody likely relies on the phonetic content in which the acoustic cues are embedded. It remains to be examined whether the LNR may serve as a neural signature for language-specific processing of prosodic phonology beyond auditory processing of the critical acoustic cues at the suprasyllabic level.


2019 ◽  
Author(s):  
S. A. Herff ◽  
C. Herff ◽  
A. J. Milne ◽  
G. D. Johnson ◽  
J. J. Shih ◽  
...  

Abstract
Rhythmic auditory stimuli are known to elicit matching activity patterns in neural populations. Furthermore, recent research has established the particular importance of high-gamma brain activity in auditory processing by showing its involvement in auditory phrase segmentation and envelope tracking. Here, we use electrocorticographic (ECoG) recordings from eight human listeners to examine whether periodicities in high-gamma activity track the periodicities in the envelope of musical rhythms during rhythm perception and imagination. Rhythm imagination was elicited by instructing participants to imagine the rhythm continuing during pauses of several repetitions. To identify electrodes whose periodicities in high-gamma activity track the periodicities in the musical rhythms, we computed the correlation between the autocorrelations (ACC) of the musical rhythms and of the neural signals. A condition in which participants listened to white noise was used to establish a baseline. High-gamma autocorrelations in auditory areas in the superior temporal gyrus and in frontal areas of both hemispheres significantly matched the autocorrelation of the musical rhythms. Overall, numerous significant electrodes were observed in the right hemisphere. Of particular interest is a large cluster of electrodes in the right prefrontal cortex that was active during both rhythm perception and imagination, indicating conscious processing of the rhythms’ structure as opposed to mere auditory phenomena. The ACC approach clearly highlights that high-gamma activity measured from cortical electrodes tracks both attended and imagined rhythms.
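The core of this autocorrelation-matching idea can be illustrated with a minimal sketch: correlate the autocorrelation function of a stimulus envelope with that of a (here simulated) neural envelope. This is not the authors' actual pipeline; the sampling rate, signals, and noise level are all invented for the example.

```python
import numpy as np

def autocorr(x, max_lag):
    """Normalized autocorrelation of a mean-removed signal, lags 0..max_lag."""
    x = x - x.mean()
    full = np.correlate(x, x, mode="full")
    ac = full[len(x) - 1 : len(x) + max_lag]
    return ac / ac[0]

def acc_match(stim_env, neural_env, max_lag):
    """Pearson correlation between two autocorrelation functions (lag 0 excluded)."""
    a = autocorr(stim_env, max_lag)[1:]
    b = autocorr(neural_env, max_lag)[1:]
    return np.corrcoef(a, b)[0, 1]

# Toy data: a 2 Hz pulse "rhythm", an envelope that tracks it, and pure noise
fs = 100                                    # Hz, invented sampling rate
t = np.arange(0, 10, 1 / fs)
rhythm = (np.sin(2 * np.pi * 2 * t) > 0.9).astype(float)
rng = np.random.default_rng(0)
tracking = rhythm + 0.5 * rng.standard_normal(len(t))
noise = rng.standard_normal(len(t))

print(acc_match(rhythm, tracking, fs))      # tracking envelope: high match
print(acc_match(rhythm, noise, fs))         # unrelated noise: low match
```

Comparing autocorrelations rather than raw signals makes the match phase-invariant, which is one reason this kind of approach suits imagined rhythms, where no stimulus-locked onset is available.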


2021 ◽  
Author(s):  
Shannon L.M. Heald ◽  
Stephen C. Van Hedger ◽  
John Veillette ◽  
Katherine Reis ◽  
Joel S. Snyder ◽  
...  

Abstract
The ability to generalize rapidly across specific experiences is vital for robust recognition of new patterns, especially in speech perception, given acoustic-phonetic pattern variability. Behavioral research has demonstrated that listeners can rapidly generalize their experience with a talker’s speech and quickly improve understanding of a difficult-to-understand talker without prolonged practice, e.g., even after a single training session. Here, we examine the differences in neural responses to generalized versus rote learning in auditory cortical processing by training listeners to understand a novel synthetic talker, using a Pretest-Posttest design with electroencephalography (EEG). Participants were trained using either (1) a large inventory of words in which no words repeated across the experiment (generalized learning) or (2) a small inventory of words in which words repeated (rote learning). Analysis of long-latency auditory evoked potentials at Pretest and Posttest revealed that while rote and generalized learning both produce rapid changes in auditory processing, the nature of these changes differed. In the context of adapting to a talker, generalized learning is marked by an amplitude reduction in the N1-P2 complex and by the presence of a late-negative (LN) wave in the auditory evoked potential following training. Rote learning, however, is marked only by temporally later source configuration changes. The early N1-P2 change, found only for generalized learning, suggests that generalized learning relies on the attentional system to reorganize the way acoustic features are selectively processed. This change in relatively early sensory processing (i.e., during the first 250 ms) is consistent with an active processing account of speech perception, which proposes that the ability to rapidly adjust to the specific vocal characteristics of a new talker (for which rote learning is rare) relies on attentional mechanisms to adaptively tune early auditory processing sensitivity.
Statement of Significance
Previous research on perceptual learning has typically examined neural responses during rote learning: training and testing are carried out with the same stimuli. As a result, it is not clear that findings from these studies can explain learning that generalizes to novel patterns, which is critical in speech perception. Are neural responses to generalized learning in auditory processing different from neural responses to rote learning? Results indicate that rote learning of a particular talker’s speech involves brain regions focused on encoding and retrieving specific learned patterns, whereas generalized learning involves brain regions involved in reorganizing attention during early sensory processing. In learning speech from a novel talker, only generalized learning is marked by changes in the N1-P2 complex (reflective of secondary auditory cortical processing). The results are consistent with the view that robust speech perception relies on the fast adjustment of attentional mechanisms to adaptively tune auditory sensitivity to cope with acoustic variability.


2008 ◽  
Vol 39 (2) ◽  
pp. 255-265 ◽  
Author(s):  
J. Barrett ◽  
J. L. Armony

Background
We examined how individual differences in trait anxiety (TA) influence the neural responses associated with the acquisition and extinction of anticipatory anxiety elicited through a context conditioning paradigm, with particular focus on the amygdala and the subgenual anterior cingulate cortex (sgACC).
Method
During two sessions of echo-planar functional magnetic resonance imaging (fMRI), 18 healthy volunteers completed a decision-making task with two randomly alternating 28-s to 32-s background screen colour blocks. One of the colours was associated with the presentation of an aversive noise (CTX+) and the other colour was ‘safe’ (CTX−). In the first session (Acquisition), 33% of CTX+ colour blocks were paired with noise; in the second session (Extinction), no noise was presented.
Results
The amygdala displayed an increased response to CTX+ compared with CTX− colour blocks during both the Acquisition and Extinction sessions, whereas the ACC displayed an increased response to CTX+ compared with CTX− colour blocks during Extinction only. In addition, a greater conditioned response (CTX+ minus CTX−) was observed in the ACC when comparing the Extinction and Acquisition sessions. Correlation analyses further showed that higher levels of TA were associated with a higher conditioned response in the amygdala during Extinction, as well as a greater differential conditioned response (i.e., Extinction > Acquisition) in the ACC.
Conclusions
Our results support the idea that individuals with high levels of anxiety-relevant traits, who may be vulnerable to developing an anxiety disorder, display a more resilient anxiety response during extinction, characterized by hyper-responsivity in the amygdala.


Author(s):  
Cristina Trentini ◽  
Marco Pagani ◽  
Marco Lauriola ◽  
Renata Tambelli

Neuroscientific research has largely investigated the neurobiological correlates of maternal and (to a much lesser extent) paternal responsiveness in the post-partum period. In contrast, much less is known about the neural processing of infant emotions during pregnancy. Twenty mothers and 19 fathers were recruited independently during the third trimester of pregnancy. High-density electroencephalography (hdEEG) was recorded while expectant parents passively viewed images representing distressed, ambiguous, happy, and neutral faces of unknown infants. Correlational analyses were performed to detect a link between neural responses to infant facial expressions and emotional self-awareness. In response to infant emotions, mothers and fathers showed similar cerebral activity in regions involved in high-order socio-affective processes. Mothers and fathers also showed different brain activity in premotor regions implicated in high-order motor control, in occipital regions involved in visuo-spatial information processing and visual mental imagery, as well as in inferior parietal regions involved in attention allocation. Low emotional self-awareness correlated negatively with activity in parietal regions subserving empathy in mothers, while it correlated positively with activity in temporal and occipital areas implicated in mentalizing and visual mental imagery in fathers. This study broadens knowledge of the neural responses to infant emotions during pregnancy.


2020 ◽  
Vol 6 (30) ◽  
pp. eaba7830
Author(s):  
Laurianne Cabrera ◽  
Judit Gervain

Speech perception is constrained by auditory processing. Although at birth infants have an immature auditory system and limited language experience, they show remarkable speech perception skills. To assess neonates’ ability to process the complex acoustic cues of speech, we combined near-infrared spectroscopy (NIRS) and electroencephalography (EEG) to measure brain responses to syllables differing in consonants. The syllables were presented in three conditions preserving (i) original temporal modulations of speech [both amplitude modulation (AM) and frequency modulation (FM)], (ii) both fast and slow AM, but not FM, or (iii) only the slowest AM (<8 Hz). EEG responses indicate that neonates can encode consonants in all conditions, even without the fast temporal modulations, similarly to adults. Yet, the fast and slow AM activate different neural areas, as shown by NIRS. Thus, the immature human brain is already able to decompose the acoustic components of speech, laying the foundations of language learning.
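Condition (iii), which keeps only amplitude modulations below 8 Hz, rests on a standard signal-processing idea: extract the temporal envelope and low-pass it. A crude numpy-only sketch of that idea follows (rectify-and-smooth in place of a proper vocoder filterbank; all parameters invented):

```python
import numpy as np

fs = 16000                                   # Hz, assumed audio sampling rate
t = np.arange(0, 1.0, 1 / fs)

# Toy "speech" fragment: a 4 Hz amplitude modulation on a noise carrier
true_env = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))
rng = np.random.default_rng(2)
signal = true_env * rng.standard_normal(len(t))

# Crude slow-AM extraction: full-wave rectify, then smooth with a moving
# average whose ~125 ms window suppresses modulations faster than ~8 Hz
win = fs // 8
slow_env = np.convolve(np.abs(signal), np.ones(win) / win, mode="same")

# The recovered envelope should follow the imposed 4 Hz modulation
print(np.corrcoef(slow_env, true_env)[0, 1])
```

In actual vocoder-style stimuli this extraction is done per frequency band and the smoothed envelope is reimposed on a new carrier; the sketch shows only the envelope step for a single band.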

