Slow phase-locked endogenous modulations support selective attention to sound

2021 ◽  
Author(s):  
Magdalena Kachlicka ◽  
Aeron Laffere ◽  
Fred Dick ◽  
Adam Tierney

Abstract To make sense of complex soundscapes, listeners must select and attend to task-relevant streams while ignoring uninformative sounds. One possible neural mechanism underlying this process is alignment of endogenous oscillations with the temporal structure of the target sound stream. Such a mechanism has been suggested to mediate attentional modulation of neural phase-locking to the rhythms of attended sounds. However, such modulations are also compatible with an alternative framework, in which attention acts as a filter that enhances exogenously-driven neural auditory responses. Here we attempted to adjudicate between these theoretical accounts by playing two tone streams that varied across conditions in tone duration and presentation rate; participants attended to one stream or listened passively. Attentional modulation of the evoked waveform was roughly sinusoidal and scaled with presentation rate, whereas the passive response did not. This suggests that auditory attentional selection is carried out via phase-locking of slow endogenous neural rhythms.
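
A minimal sketch (not the authors' analysis pipeline) of one way to ask whether an attentional modulation is "roughly sinusoidal" at the stimulus rate: fit a sine and cosine at the presentation rate to the attend-minus-passive difference waveform and read off amplitude, phase, and goodness of fit. The sampling rate, the 4 Hz rate, and the synthetic waveforms below are illustrative assumptions.

```python
# Sketch: fit a sinusoid at the tone presentation rate to the
# attend-minus-passive difference waveform. All values are illustrative.
import numpy as np

fs = 250.0            # assumed EEG sampling rate (Hz)
rate = 4.0            # assumed tone presentation rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)

# Placeholder evoked waveforms (replace with real condition averages).
rng = np.random.default_rng(0)
attend = np.sin(2 * np.pi * rate * t + 0.3) + 0.2 * rng.standard_normal(t.size)
passive = 0.1 * rng.standard_normal(t.size)
diff = attend - passive

# Least-squares fit of sine and cosine components at the presentation rate.
design = np.column_stack([np.sin(2 * np.pi * rate * t),
                          np.cos(2 * np.pi * rate * t),
                          np.ones_like(t)])
coef, *_ = np.linalg.lstsq(design, diff, rcond=None)
fit = design @ coef

amplitude = np.hypot(coef[0], coef[1])
phase = np.arctan2(coef[1], coef[0])
r2 = 1 - np.var(diff - fit) / np.var(diff)
print(f"modulation amplitude={amplitude:.2f}, phase={phase:.2f} rad, R^2={r2:.2f}")
```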

2020 ◽  
Vol 32 (9) ◽  
pp. 1654-1671
Author(s):  
Melisa Menceloglu ◽  
Marcia Grabowecky ◽  
Satoru Suzuki

Sensory systems utilize temporal structure in the environment to build expectations about the timing of forthcoming events. We investigated the effects of rhythm-based temporal expectation on auditory responses measured with EEG recorded from the frontocentral sites implicated in auditory processing. By manipulating temporal expectation and the interonset interval (IOI) of tones, we examined how neural responses adapted to auditory rhythm and reacted to stimuli that violated the rhythm. Participants passively listened to the tones while watching a silent nature video. In Experiment 1 (n = 22), in the long-IOI block, tones were frequently presented (80%) with 1.7-sec IOI and infrequently presented (20%) with 1.2-sec IOI, generating unexpectedly early tones that violated temporal expectation. Conversely, in the short-IOI block, tones were frequently presented with 1.2-sec IOI and infrequently presented with 1.7-sec IOI, generating late tones. We analyzed the tone-evoked N1–P2 amplitude of ERPs and intertrial phase clustering in the theta–alpha band. The results provided evidence of strong delay-dependent adaptation effects (short-term, sensitive to IOI), weak cumulative adaptation effects (long-term, driven by tone repetition over time), and robust temporal-expectation violation effects over and above the adaptation effects. Experiment 2 (n = 22) repeated Experiment 1 with shorter IOIs of 1.2 and 0.7 sec. Overall, we found evidence of strong delay-dependent adaptation effects, weak cumulative adaptation effects (which may most efficiently accumulate at the tone presentation rate of ∼1 Hz), and robust temporal-expectation violation effects that substantially boost auditory responses to the extent of overriding the delay-dependent adaptation effects, likely through mechanisms involved in exogenous attention.
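
One of the two measures named above, intertrial phase clustering (ITPC), can be sketched in a few lines: band-pass each trial, take the instantaneous phase, and measure how tightly the phases cluster across trials at each time point. The band edges, filter order, and placeholder data below are assumptions, not the authors' exact settings.

```python
# Sketch of intertrial phase clustering (ITPC) in a theta-alpha band.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                      # assumed sampling rate (Hz)
n_trials, n_samples = 40, 1000  # illustrative dimensions
rng = np.random.default_rng(1)
eeg = rng.standard_normal((n_trials, n_samples))  # placeholder trial data

# Band-pass 4-12 Hz, then extract instantaneous phase per trial.
b, a = butter(4, [4 / (fs / 2), 12 / (fs / 2)], btype="band")
phase = np.angle(hilbert(filtfilt(b, a, eeg, axis=1), axis=1))

# ITPC: length of the mean phase vector across trials, per time point.
itpc = np.abs(np.mean(np.exp(1j * phase), axis=0))
print("peak ITPC:", itpc.max())
```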


2017 ◽  
Author(s):  
Victoria Leong ◽  
Elizabeth Byrne ◽  
Kaili Clackson ◽  
Naomi Harte ◽  
Sarah Lam ◽  
...  

Abstract During their early years, infants use the temporal statistics of the speech signal to bootstrap language learning, but the neural mechanisms that facilitate this temporal analysis are poorly understood. In adults, neural oscillatory entrainment to the speech amplitude envelope has been proposed to be a mechanism for multi-time resolution analysis of adult-directed speech, with a focus on Theta (syllable) and low Gamma (phoneme) rates. However, it is not known whether developing infants perform multi-time oscillatory analysis of infant-directed speech with the same temporal focus. Here, we examined infants’ processing of the temporal structure of sung nursery rhymes, and compared their neural entrainment across multiple timescales with that of well-matched adults (their mothers). Typical infants and their mothers (N=58, median age 8.3 months) viewed videos of sung nursery rhymes while their neural activity at C3 and C4 was concurrently monitored using dual-electroencephalography (dual-EEG). The accuracy of infants’ and adults’ neural oscillatory entrainment to speech was compared by calculating their phase-locking values (PLVs) across the EEG-speech frequency spectrum. Infants showed better phase-locking than adults at Theta (~4.5 Hz) and Alpha (~9.3 Hz) rates, corresponding to rhyme and phoneme patterns in our stimuli. Infant entrainment levels matched adults’ for syllables and prosodic stress patterns (Delta, ~1-2 Hz). By contrast, infants were less accurate than adults at tracking slow (~0.5 Hz) phrasal patterns. Therefore, compared to adults, language-learning infants’ temporal parsing of the speech signal shows highest relative acuity at Theta-Alpha rates. This temporal focus could support the accurate encoding of syllable and rhyme patterns during infants’ sensitive period for phonetic and phonotactic learning. Therefore, oscillatory entrainment could be one neural mechanism that supports early bootstrapping of language learning from infant-directed speech (such as nursery rhymes).
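
A minimal sketch of a phase-locking value (PLV) between an EEG channel and a speech amplitude envelope, in the spirit of the analysis described above. The band definitions, filter choices, and placeholder signals are assumptions rather than the authors' pipeline.

```python
# Sketch: EEG-speech phase-locking value (PLV) in a few frequency bands.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 100.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(2)
speech_env = np.abs(rng.standard_normal(t.size))   # placeholder amplitude envelope
eeg = rng.standard_normal(t.size)                  # placeholder EEG channel (e.g. C3)

def plv(x, y, lo, hi, fs):
    """Phase-locking between two signals in the band [lo, hi] Hz."""
    b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    px = np.angle(hilbert(filtfilt(b, a, x)))
    py = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (px - py))))

bands = {"delta": (1, 2), "theta": (4, 5), "alpha": (8, 10)}
for name, (lo, hi) in bands.items():
    print(name, plv(eeg, speech_env, lo, hi, fs))
```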


2011 ◽  
Vol 84 (1) ◽  
Author(s):  
Sungwoo Ahn ◽  
Choongseok Park ◽  
Leonid L. Rubchinsky

2019 ◽  
Author(s):  
Aeron Laffere ◽  
Fred Dick ◽  
Adam Tierney

Abstract How does the brain follow a sound that is mixed with others in a noisy environment? A possible strategy is to allocate attention to task-relevant time intervals while suppressing irrelevant intervals - a strategy that could be implemented by aligning neural modulations with critical moments in time. Here we tested whether selective attention to non-verbal sound streams is linked to shifts in the timing of attentional modulations of EEG activity, and investigated whether this neural mechanism can be enhanced by short-term training and musical experience. Participants performed a 1-back memory task on a target auditory stream presented at 4 Hz while ignoring a distractor auditory stream also presented at 4 Hz, but with a 180-degree shift in phase. The two attention conditions were linked to a roughly 180-degree shift in phase in the EEG signal at 4 Hz. Moreover, there was a strong relationship between performance on the 1-back task and the timing of the EEG modulation with respect to the attended band. EEG modulation timing was also enhanced after several days of training on the selective attention task and enhanced in experienced musicians. These results support the hypothesis that modulation of neural timing facilitates attention to particular moments in time and indicate that phase timing is a robust and reliable marker of individual differences in auditory attention. Moreover, these results suggest that nonverbal selective attention can be enhanced in the short term by only a few hours of practice and in the long term by years of musical training.
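
A minimal sketch of how the 4 Hz EEG phase could be estimated per attention condition and compared across conditions to look for the ~180-degree shift reported above. Epoch length, sampling rate, and the synthetic epochs are illustrative assumptions.

```python
# Sketch: circular-mean 4 Hz phase per condition and the shift between them.
import numpy as np

fs, dur = 250.0, 2.0               # assumed sampling rate and epoch length (s)
n = int(fs * dur)
t = np.arange(n) / fs
rng = np.random.default_rng(3)

def phase_at_4hz(trials):
    """Circular-mean phase of the 4 Hz Fourier component across trials."""
    spec = np.fft.rfft(trials, axis=1)
    k = int(round(4.0 * dur))       # frequency bin index for 4 Hz
    return np.angle(np.mean(np.exp(1j * np.angle(spec[:, k]))))

# Placeholder epochs: two streams offset by half a cycle (180 degrees) at 4 Hz.
attend_a = np.sin(2 * np.pi * 4 * t) + rng.standard_normal((30, n))
attend_b = np.sin(2 * np.pi * 4 * t + np.pi) + rng.standard_normal((30, n))

shift = np.angle(np.exp(1j * (phase_at_4hz(attend_a) - phase_at_4hz(attend_b))))
print(f"phase shift between conditions: {np.degrees(shift):.1f} degrees")
```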


1999 ◽  
Vol 82 (3) ◽  
pp. 1542-1559 ◽  
Author(s):  
Michael Brosch ◽  
Andreas Schulz ◽  
Henning Scheich

It is well established that the tone-evoked response of neurons in auditory cortex can be attenuated if another tone is presented several hundred milliseconds before. The present study explores in detail a complementary phenomenon in which the tone-evoked response is enhanced by a preceding tone. Action potentials from multiunit groups and single units were recorded from primary and caudomedial auditory cortical fields in lightly anesthetized macaque monkeys. Stimuli were two suprathreshold tones of 100-ms duration, presented in succession. The frequency of the first tone and the stimulus onset asynchrony (SOA) between the two tones were varied systematically, whereas the second tone was fixed. Compared with presenting the second tone in isolation, the response to the second tone was enhanced significantly when it was preceded by the first tone. This was observed in 87 of 130 multiunit groups and in 29 of 69 single units, with no obvious difference between auditory fields. Response enhancement occurred for a wide range of SOA (110–329 ms) and for a wide range of frequencies of the first tone. Most of the first tones that enhanced the response to the second tone evoked responses themselves. The stimulus that on average produced maximal enhancement was a tone pair with an SOA of 120 ms and a frequency separation of about one octave. The frequency/SOA combinations that induced response enhancement were mostly different from the ones that induced response attenuation. Results suggest that response enhancement, in addition to response attenuation, provides a basic neural mechanism involved in the cortical processing of the temporal structure of sounds.
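
As a toy illustration of the comparison described above (second tone preceded by a first tone versus presented in isolation), a response-enhancement index can be computed from spike counts. The window, counts, and trial numbers below are invented for illustration, not the authors' data or analysis.

```python
# Sketch: response-enhancement ratio from spike counts in a two-tone paradigm.
import numpy as np

rng = np.random.default_rng(6)
# Spike counts to the second (fixed) tone, per trial, in a 100-ms window.
second_alone = rng.poisson(4.0, size=50)     # second tone presented in isolation
second_paired = rng.poisson(6.0, size=50)    # second tone preceded by a first tone

enhancement = second_paired.mean() / second_alone.mean()
print(f"enhancement ratio: {enhancement:.2f}")
```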


Author(s):  
David D. Nolte

Coupled linear oscillators provide a central paradigm for the combined behavior of coupled systems and the emergence of normal modes. Nonlinear coupling of two autonomous oscillators provides an equally important paradigm for the emergence of collective behavior through synchronization. Simple asymmetric coupling of integrate-and-fire oscillators captures the essence of frequency locking. Quasiperiodicity on the torus (action-angle oscillators) with nonlinear coupling demonstrates phase locking, while the sine-circle map is a discrete map that displays multiple Arnold tongues at frequency-locking resonances. External synchronization of a phase oscillator is analyzed in terms of the “slow” phase difference, resulting in a beat frequency and frequency entrainment that are functions of the coupling strength.
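
A minimal sketch of the sine-circle map mentioned above: iterate θ_{n+1} = θ_n + Ω − (K/2π) sin(2π θ_n) (mod 1) and estimate the winding number, which plateaus at rational values inside the Arnold tongues. Parameter values are illustrative.

```python
# Sketch: sine-circle map and its winding (rotation) number.
import numpy as np

def winding_number(omega, k, n_iter=5000, n_skip=1000):
    """Average unwrapped phase advance per iteration of the sine-circle map."""
    theta, total = 0.0, 0.0
    for i in range(n_iter):
        step = omega - (k / (2 * np.pi)) * np.sin(2 * np.pi * theta)
        theta = (theta + step) % 1.0
        if i >= n_skip:
            total += step
    return total / (n_iter - n_skip)

# With no coupling the winding number tracks omega exactly; with coupling,
# nearby omega values tend to collapse onto the same rational value (locking).
for omega in (0.48, 0.50, 0.52):
    print(f"omega={omega:.2f}  k=0: {winding_number(omega, 0.0):.3f}"
          f"  k=0.9: {winding_number(omega, 0.9):.3f}")
```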


2021 ◽  
pp. 1-14
Author(s):  
Octave Etard ◽  
Rémy Ben Messaoud ◽  
Gabriel Gaugain ◽  
Tobias Reichenbach

Abstract Speech and music are spectrotemporally complex acoustic signals that are highly relevant for humans. Both contain a temporal fine structure that is encoded in the neural responses of subcortical and cortical processing centers. The subcortical response to the temporal fine structure of speech has recently been shown to be modulated by selective attention to one of two competing voices. Music similarly often consists of several simultaneous melodic lines, and a listener can selectively attend to a particular one at a time. However, the neural mechanisms that enable such selective attention remain largely enigmatic, not least since most investigations to date have focused on short and simplified musical stimuli. Here, we studied the neural encoding of classical musical pieces in human volunteers, using scalp EEG recordings. We presented volunteers with continuous musical pieces composed of one or two instruments. In the latter case, the participants were asked to selectively attend to one of the two competing instruments and to perform a vibrato identification task. We used linear encoding and decoding models to relate the recorded EEG activity to the stimulus waveform. We show that we can measure neural responses to the temporal fine structure of melodic lines played by a single instrument, at the population level as well as for most individual participants. The neural response peaks at a latency of 7.6 msec and is not measurable past 15 msec. When analyzing the neural responses to the temporal fine structure elicited by competing instruments, we found no evidence of attentional modulation. We observed, however, that low-frequency neural activity exhibited a modulation consistent with the behavioral task at latencies from 100 to 160 msec, in a similar manner to the attentional modulation observed in continuous speech (N100). Our results show that, much like speech, the temporal fine structure of music is tracked by neural activity. In contrast to speech, however, this response appears unaffected by selective attention in the context of our experiment.
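
A minimal sketch of a linear forward (encoding) model of the kind mentioned above: regress the EEG onto time-lagged copies of a stimulus feature with ridge regularization and evaluate the prediction. The lag range, regularization strength, and synthetic signals are assumptions, not the authors' settings.

```python
# Sketch: time-lagged ridge regression relating a stimulus feature to EEG.
import numpy as np

fs = 100.0
rng = np.random.default_rng(4)
stim = rng.standard_normal(6000)                  # placeholder stimulus feature
true_kernel = np.exp(-np.arange(10) / 3.0)        # toy "neural" response kernel
eeg = np.convolve(stim, true_kernel, mode="full")[:stim.size]
eeg += 0.5 * rng.standard_normal(stim.size)       # placeholder noisy EEG

# Design matrix of lagged copies of the stimulus (0 to 150 ms of causal lags).
n_lags = int(0.15 * fs)
X = np.column_stack([np.roll(stim, lag) for lag in range(n_lags)])
X[:n_lags] = 0.0                                  # discard wrap-around samples

# Ridge regression: w = (X'X + lambda*I)^-1 X'y
lam = 10.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)
pred = X @ w
r = np.corrcoef(pred, eeg)[0, 1]
print(f"encoding-model prediction correlation: {r:.2f}")
```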


1989 ◽  
Vol 100 (3) ◽  
pp. 177-186 ◽  
Author(s):  
Franklin S. Coale ◽  
Edward J. Walsh ◽  
JoAnn McGee ◽  
Horst R. Konrad

Evoked potentials produced by direct unilateral mechanical stimulation of the cannulated horizontal semicircular canal were investigated parametrically in anesthetized adult cats (40 mg/kg pentobarbital). Stimuli were fluid pressure pulses in a closed hydraulic system (no net flow), which was coupled to the lateral semicircular canal near the ampulla. Hydraulic waveform output and fluid pressure were monitored in situ via a parallel hydraulic circuit during experiments. Maximum fluid displacement at the level of the horizontal canal was 0.025 microliters. The intensity, duration, and presentation rate of the stimulus were varied during experiments. Field potentials were recorded differentially using subdermal electrodes, with the active lead in the region of the mastoid referenced to a distant nasal site. A total of 256 trials was accumulated for each run using an averaging computer. Evoked responses were physiologically vulnerable and reproducible, with little variance among animals. Response amplitude increased monotonically until saturation was noted, and responses followed the temporal structure of the pressure wave. Polarity reversal with differing electrode placement suggests that the generator site lies within the mastoid. Intense broadband acoustic stimuli and eighth nerve sectioning did not affect the vestibular evoked potentials, but could be shown to abolish the auditory evoked potentials. Results of these experiments support the notion that vestibular evoked potentials are related to the first derivative of the pressure pulse waveforms. Future experiments will be directed toward the assessment of vestibular physiology and pharmacology with this evoked response method.
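
A minimal sketch of the evoked-potential averaging described above (256 trials per run) and of checking the reported relationship between the averaged response and the first derivative of the pressure-pulse waveform. The waveforms, noise level, and sampling rate are illustrative assumptions, not the study's recordings.

```python
# Sketch: trial averaging and comparison with the derivative of the pressure pulse.
import numpy as np

fs = 10000.0
t = np.arange(0, 0.05, 1 / fs)
pressure = np.exp(-((t - 0.01) / 0.003) ** 2)     # placeholder pressure pulse
rng = np.random.default_rng(5)

# Assume each single-trial response follows dP/dt, buried in noise.
deriv = np.gradient(pressure, 1 / fs)
deriv /= np.abs(deriv).max()                      # unit-peak "true" response shape
trials = deriv + rng.standard_normal((256, t.size))

evoked = trials.mean(axis=0)                      # signal averaging over 256 trials
r = np.corrcoef(evoked, deriv)[0, 1]
print(f"correlation of averaged response with dP/dt: {r:.2f}")
```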

