No Evidence of Attentional Modulation of the Neural Response to the Temporal Fine Structure of Continuous Musical Pieces

2021 ◽  
pp. 1-14
Author(s):  
Octave Etard ◽  
Rémy Ben Messaoud ◽  
Gabriel Gaugain ◽  
Tobias Reichenbach

Abstract Speech and music are spectrotemporally complex acoustic signals that are highly relevant for humans. Both contain a temporal fine structure that is encoded in the neural responses of subcortical and cortical processing centers. The subcortical response to the temporal fine structure of speech has recently been shown to be modulated by selective attention to one of two competing voices. Music similarly often consists of several simultaneous melodic lines, and a listener can selectively attend to a particular one at a time. However, the neural mechanisms that enable such selective attention remain largely enigmatic, not least since most investigations to date have focused on short and simplified musical stimuli. Here, we studied the neural encoding of classical musical pieces in human volunteers, using scalp EEG recordings. We presented volunteers with continuous musical pieces composed of one or two instruments. In the latter case, the participants were asked to selectively attend to one of the two competing instruments and to perform a vibrato identification task. We used linear encoding and decoding models to relate the recorded EEG activity to the stimulus waveform. We show that we can measure neural responses to the temporal fine structure of melodic lines played by a single instrument, at the population level as well as for most individual participants. The neural response peaks at a latency of 7.6 msec and is not measurable past 15 msec. When analyzing the neural responses to the temporal fine structure elicited by competing instruments, we found no evidence of attentional modulation. We observed, however, that low-frequency neural activity exhibited a modulation consistent with the behavioral task at latencies from 100 to 160 msec, in a similar manner to the attentional modulation observed in continuous speech (N100). Our results show that, much like speech, the temporal fine structure of music is tracked by neural activity. In contrast to speech, however, this response appears unaffected by selective attention in the context of our experiment.
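The abstract above describes linear encoding (forward) models that relate the EEG to the stimulus waveform. The sketch below illustrates the general idea as a time-lagged ridge regression (a temporal response function); the sampling rate, lag range, regularization, and function names are illustrative assumptions, not the authors' actual pipeline or parameters.

```python
# Minimal sketch of a linear encoding (forward, TRF-style) model that maps a
# stimulus waveform onto one EEG channel via time-lagged ridge regression.
# Illustrative only: fs, lag range, and alpha are assumptions, not study parameters.
import numpy as np

def lagged_design_matrix(stimulus, max_lag_samples):
    """Columns are the stimulus delayed by 0..max_lag_samples samples."""
    n = len(stimulus)
    X = np.zeros((n, max_lag_samples + 1))
    for lag in range(max_lag_samples + 1):
        X[lag:, lag] = stimulus[:n - lag]
    return X

def fit_encoder(stimulus, eeg, fs=1000, max_lag_ms=15, alpha=1.0):
    """Estimate a response kernel (one coefficient per lag) with ridge regression."""
    max_lag = int(round(max_lag_ms * fs / 1000))
    X = lagged_design_matrix(stimulus, max_lag)
    # Ridge solution: w = (X'X + alpha*I)^-1 X'y
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ eeg)

# Toy data standing in for a real recording
rng = np.random.default_rng(0)
stim = rng.standard_normal(10_000)
eeg = np.convolve(stim, [0.0, 0.5, 0.2], mode="full")[:10_000] + rng.standard_normal(10_000)
trf = fit_encoder(stim, eeg)
print(trf[:5])
```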


PLoS ONE ◽  
2012 ◽  
Vol 7 (9) ◽  
pp. e45579 ◽  
Author(s):  
Tobias Reichenbach ◽  
A. J. Hudspeth

2020 ◽  
Vol 21 (6) ◽  
pp. 527-544
Author(s):  
H. C. Stronks ◽  
J. J. Briaire ◽  
J. H. M. Frijns

Abstract Cochlear implant (CI) users have more difficulty understanding speech in temporally modulated noise than in steady-state (SS) noise. This is thought to be caused by the limited low-frequency information that CIs provide, as well as by the envelope coding in CIs that discards the temporal fine structure (TFS). Contralateral amplification with a hearing aid, referred to as bimodal hearing, can potentially provide CI users with TFS cues to complement the envelope cues provided by the CI signal. In this study, we investigated whether the use of a CI alone provides access to only envelope cues and whether acoustic amplification can provide additional access to TFS cues. To this end, we evaluated speech recognition in bimodal listeners, using SS noise and two amplitude-modulated noise types, namely babble noise and amplitude-modulated steady-state (AMSS) noise. We hypothesized that speech recognition in noise depends on the envelope of the noise, but not on its TFS when listening with a CI. Secondly, we hypothesized that the amount of benefit gained by the addition of a contralateral hearing aid depends on both the envelope and TFS of the noise. The two amplitude-modulated noise types decreased speech recognition more effectively than SS noise. Against expectations, however, we found that babble noise decreased speech recognition more effectively than AMSS noise in the CI-only condition. Therefore, we rejected our hypothesis that TFS is not available to CI users. In line with expectations, we found that the bimodal benefit was highest in babble noise. However, there was no significant difference between the bimodal benefit obtained in SS and AMSS noise. Our results suggest that a CI alone can provide TFS cues and that bimodal benefits in noise depend on TFS, but not on the envelope of the noise.
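The envelope/TFS distinction discussed in this abstract is commonly illustrated with the Hilbert decomposition of a narrow-band signal into a slow amplitude envelope and a rapidly varying fine structure. The sketch below shows that standard decomposition; it is not the processing chain used in the study, and the signal parameters are arbitrary.

```python
# Standard Hilbert decomposition of a narrow-band signal into its temporal
# envelope and temporal fine structure (TFS). Illustrative only.
import numpy as np
from scipy.signal import hilbert

def envelope_and_tfs(band_signal):
    """Return (envelope, fine structure) of a narrow-band signal."""
    analytic = hilbert(band_signal)
    envelope = np.abs(analytic)        # slow amplitude modulation
    tfs = np.cos(np.angle(analytic))   # rapid carrier fluctuations, unit amplitude
    return envelope, tfs

# Example: a 1 kHz tone with a 4 Hz amplitude modulation
fs = 16_000
t = np.arange(0, 1.0, 1 / fs)
signal = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
env, tfs = envelope_and_tfs(signal)
```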


2019 ◽  
Vol 62 (6) ◽  
pp. 2018-2034 ◽  
Author(s):  
Eric C. Hoover ◽  
Brianna N. Kinney ◽  
Karen L. Bell ◽  
Frederick J. Gallun ◽  
David A. Eddins

Purpose Growing evidence supports the inclusion of perceptual tests that quantify the processing of temporal fine structure (TFS) in clinical hearing assessment. Many tasks have been used to evaluate TFS in the laboratory that vary greatly in the stimuli used and in whether the judgments require monaural or binaural comparisons of TFS. The purpose of this study was to compare laboratory measures of TFS for inclusion in a battery of suprathreshold auditory tests. A subset of available TFS tasks were selected on the basis of potential clinical utility and were evaluated using metrics that focus on characteristics important for clinical use. Method TFS measures were implemented in replication of studies that demonstrated clinical utility. Monaural, diotic, and dichotic measures were evaluated in 11 young listeners with normal hearing. Measures included frequency modulation (FM) tasks, harmonic frequency shift detection, interaural phase difference (TFS–low frequency), interaural time difference (ITD), monaural gap duration discrimination, and tone detection in noise with and without a difference in interaural phase (N₀S₀, N₀Sπ). Data were compared with published results and evaluated with metrics of consistency and efficiency. Results Thresholds obtained were consistent with published data. There was no evidence of predictive relationships among the measures, consistent with a homogeneous group. The most stable tasks across repeated testing were TFS–low frequency, diotic and dichotic FM, and N₀Sπ. Monaural and diotic FM had the lowest normalized variance and were the most efficient after accounting for differences in total test duration, followed by ITD. Conclusions Despite a long stimulus duration, FM tasks dominated comparisons of consistency and efficiency. Small differences separated the dichotic tasks FM, ITD, and N₀Sπ. Future comparisons following procedural optimization of the tasks will evaluate clinical efficiency in populations with impairment.
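The N₀S₀ and N₀Sπ conditions mentioned above pair identical (diotic) noise at the two ears with a tone that is either in phase or phase-inverted across ears. The sketch below generates such stimuli; the frequency, level, and duration values are arbitrary illustrative choices, not the study's parameters.

```python
# Minimal sketch of N0S0 / N0Spi tone-in-noise stimuli: the noise (N0) is identical
# at both ears; the tone (S) is in phase (S0) or phase-inverted (Spi) at one ear.
# All signal parameters are arbitrary illustrative assumptions.
import numpy as np

def make_n0s_stimulus(fs=44_100, dur=0.5, tone_hz=500, tone_level=0.1, antiphasic=False):
    t = np.arange(int(dur * fs)) / fs
    rng = np.random.default_rng()
    noise = rng.standard_normal(len(t)) * 0.2        # N0: same noise at both ears
    tone = tone_level * np.sin(2 * np.pi * tone_hz * t)
    left = noise + tone
    right = noise + (-tone if antiphasic else tone)  # Spi flips the tone at one ear
    return np.column_stack([left, right])            # stereo (left, right)

n0s0 = make_n0s_stimulus(antiphasic=False)   # reference condition
n0spi = make_n0s_stimulus(antiphasic=True)   # binaural-unmasking condition
```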


2020 ◽  
Author(s):  
Paz Har-shai Yahav ◽  
Elana Zion Golumbic

Abstract Paying attention to one speaker in noisy environments can be extremely difficult. This is because task-irrelevant speech competes for processing resources with attended speech. However, whether this competition is restricted to acoustic-phonetic interference, or whether it extends to competition for linguistic processing as well, remains highly debated. To address this debate, here we test whether task-irrelevant speech sounds are integrated over time to form hierarchical representations of lexical and syntactic structures. Neural activity was recorded using magnetoencephalography (MEG) during a dichotic listening task, in which human participants attended to natural speech presented to one ear while task-irrelevant stimuli were presented to the other. Task-irrelevant stimuli consisted either of random sequences of syllables (Non-Structured) or of syllables ordered to form coherent sentences (Structured). Using hierarchical frequency-tagging, the neural signature of the different linguistic hierarchies within the Structured stimuli, namely words, phrases, and sentences, can be uniquely discerned from the neural response. We find that, indeed, the phrasal structure of task-irrelevant stimuli was represented in the neural response, primarily in left inferior frontal and posterior parietal regions. Moreover, neural tracking of attended speech in left inferior frontal regions was enhanced when task-irrelevant stimuli were linguistically structured. This pattern suggests that syntactic structure-building processes are applied to task-irrelevant speech, at least under these circumstances, and that selective attention does not fully eliminate linguistic processing of task-irrelevant speech. Rather, the inherent competition for linguistic processing resources between the two streams likely results in the increased listening effort experienced when trying to focus selective attention in multi-speaker contexts. Significance statement This study addresses the fundamental question of how the brain deals with competing speech in noisy environments. Specifically, we ask: when one attempts to focus attention on a particular speaker, what level of linguistic processing is applied to other, task-irrelevant speech? By measuring neural activity, we find evidence that the phrasal structure of task-irrelevant speech is indeed discerned, indicating that linguistic information is integrated over time and undergoes some syntactic analysis. Moreover, neural responses to attended speech were also enhanced in speech-processing regions when presented together with comprehensible yet task-irrelevant speech. These results demonstrate the inherent competition for linguistic processing resources among concurrent speech streams, providing evidence that selective attention does not fully eliminate linguistic processing of task-irrelevant speech.
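Hierarchical frequency-tagging works by presenting syllables at a fixed rate so that words, phrases, and sentences recur at integer fractions of that rate, and each linguistic level then appears as a peak in the spectrum of the neural response. The sketch below illustrates how such peaks might be read out; the tagged rates (4, 2, 1, 0.5 Hz), the sampling rate, and the random stand-in data are assumptions for illustration, not the study's values.

```python
# Sketch of a frequency-tagging readout: spectral power at the tagged rates
# corresponding to syllables, words, phrases, and sentences. Illustrative only.
import numpy as np

def tagged_peak_power(neural_signal, fs, tag_freqs):
    """Return spectral power at each tagged frequency (nearest FFT bin)."""
    spectrum = np.abs(np.fft.rfft(neural_signal)) ** 2
    freqs = np.fft.rfftfreq(len(neural_signal), 1 / fs)
    return {f: spectrum[np.argmin(np.abs(freqs - f))] for f in tag_freqs}

fs = 100  # Hz, assumed downsampled MEG/EEG rate
rng = np.random.default_rng(1)
signal = rng.standard_normal(60 * fs)  # stand-in for one minute of recorded data
power = tagged_peak_power(signal, fs, tag_freqs=[4.0, 2.0, 1.0, 0.5])
print(power)
```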


Author(s):  
K. Hama

The lateral line organs of the sea eel consist of canal and pit organs, which differ in function. The former is a low-frequency vibration detector, whereas the latter functions as an ion receptor as well as a mechanoreceptor. The fine structure of the sensory epithelia of both organs was studied by means of conventional transmission electron microscopy, high-voltage electron microscopy, and surface scanning electron microscopy. The sensory cells of the canal organ are polarized in the fronto-caudal direction, and those of the pit organ are polarized in the dorso-ventral direction. The sensory epithelia of both organs have thinner surface coats than the surrounding ordinary epithelial cells, which have very thick fuzzy coatings on the apical surface.

