Linguistic processing of task-irrelevant speech at a Cocktail Party

2020 ◽  
Author(s):  
Paz Har-shai Yahav ◽  
Elana Zion Golumbic

Abstract
Paying attention to one speaker in noisy environments can be extremely difficult. This is because task-irrelevant speech competes for processing resources with attended speech. However, whether this competition is restricted to acoustic-phonetic interference, or whether it extends to competition for linguistic processing as well, remains highly debated. To address this debate, here we test whether task-irrelevant speech sounds are integrated over time to form hierarchical representations of lexical and syntactic structures.

Neural activity was recorded using magnetoencephalography (MEG) during a dichotic listening task, in which human participants attended to natural speech presented to one ear while task-irrelevant stimuli were presented to the other. Task-irrelevant stimuli consisted either of random sequences of syllables (Non-Structured) or of syllables ordered to form coherent sentences (Structured). Using hierarchical frequency-tagging, the neural signatures of the different linguistic hierarchies within the Structured stimuli – namely words, phrases and sentences – can be uniquely discerned from the neural response.

We find that, indeed, the phrasal structure of task-irrelevant stimuli was represented in the neural response, primarily in left inferior frontal and posterior parietal regions. Moreover, neural tracking of attended speech in left inferior frontal regions was enhanced when task-irrelevant stimuli were linguistically structured. This pattern suggests that syntactic structure-building processes are applied to task-irrelevant speech, at least under these circumstances, and that selective attention does not fully eliminate linguistic processing of task-irrelevant speech. Rather, the inherent competition for linguistic processing resources between the two streams likely contributes to the increased listening effort experienced when trying to focus selective attention in multi-speaker contexts.

Significance statement
This study addresses the fundamental question of how the brain deals with competing speech in noisy environments. Specifically, we ask: when one attempts to focus attention on a particular speaker, what level of linguistic processing is applied to other, task-irrelevant speech? By measuring neural activity, we find evidence that the phrasal structure of task-irrelevant speech is indeed discerned, indicating that linguistic information is integrated over time and undergoes some syntactic analysis. Moreover, neural responses to attended speech were enhanced in speech-processing regions when it was presented together with comprehensible yet task-irrelevant speech. These results demonstrate the inherent competition for linguistic processing resources among concurrent speech streams, providing evidence that selective attention does not fully eliminate linguistic processing of task-irrelevant speech.
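The logic of frequency-tagging can be illustrated with a short sketch (this is not the authors' analysis code; the rates, duration, and signal strengths below are all hypothetical). When linguistic units are presented at fixed rates, a neural response that tracks a given level of the hierarchy shows up as a spectral peak at that level's presentation rate, which can be quantified as power at the target frequency relative to neighbouring bins:

```python
import numpy as np

def spectral_peak_snr(signal, fs, target_hz, n_neighbors=4):
    """Power at target_hz relative to the mean power of neighboring
    frequency bins. A ratio well above 1 indicates a tagged response."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    idx = np.argmin(np.abs(freqs - target_hz))
    lo = max(idx - n_neighbors, 1)  # skip the DC bin
    neighbors = np.r_[power[lo:idx], power[idx + 1:idx + 1 + n_neighbors]]
    return power[idx] / neighbors.mean()

# Simulate 60 s of "neural" data at 100 Hz: a response at a hypothetical
# 4 Hz syllable rate, a weaker response at a 1 Hz phrase rate, plus noise.
rng = np.random.default_rng(0)
fs, dur = 100, 60
t = np.arange(fs * dur) / fs
meg = (np.sin(2 * np.pi * 4 * t)          # syllable-rate response
       + 0.5 * np.sin(2 * np.pi * 1 * t)  # phrase-rate response
       + rng.standard_normal(t.size))     # broadband noise

snr_syllable = spectral_peak_snr(meg, fs, 4.0)  # large: tagged
snr_phrase = spectral_peak_snr(meg, fs, 1.0)    # large: tagged
snr_untagged = spectral_peak_snr(meg, fs, 2.7)  # near 1: no structure
print(snr_syllable, snr_phrase, snr_untagged)
```

The key property exploited in the study is that a phrase-rate peak can only emerge if successive syllables are integrated into larger units, so its presence for task-irrelevant speech implies structure-building.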

eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Paz Har-shai Yahav ◽  
Elana Zion Golumbic

Paying attention to one speaker in noisy environments can be extremely difficult, because to-be-attended and task-irrelevant speech compete for processing resources. We tested whether this competition is restricted to acoustic-phonetic interference or whether it extends to competition for linguistic processing as well. Neural activity was recorded using magnetoencephalography as human participants were instructed to attend to natural speech presented to one ear, while task-irrelevant stimuli were presented to the other. Using hierarchical frequency-tagging, task-irrelevant stimuli consisted either of random sequences of syllables or of syllables structured to form coherent sentences. We find that the phrasal structure of structured task-irrelevant stimuli was represented in the neural response in left inferior frontal and posterior parietal regions, indicating that selective attention does not fully eliminate linguistic processing of task-irrelevant speech. Additionally, neural tracking of to-be-attended speech in left inferior frontal regions was enhanced when competing with structured task-irrelevant stimuli, suggesting inherent competition between the two streams for linguistic processing.


2021 ◽  
pp. 1-14
Author(s):  
Octave Etard ◽  
Rémy Ben Messaoud ◽  
Gabriel Gaugain ◽  
Tobias Reichenbach

Abstract Speech and music are spectrotemporally complex acoustic signals that are highly relevant for humans. Both contain a temporal fine structure that is encoded in the neural responses of subcortical and cortical processing centers. The subcortical response to the temporal fine structure of speech has recently been shown to be modulated by selective attention to one of two competing voices. Music similarly often consists of several simultaneous melodic lines, and a listener can selectively attend to a particular one at a time. However, the neural mechanisms that enable such selective attention remain largely enigmatic, not least since most investigations to date have focused on short and simplified musical stimuli. Here, we studied the neural encoding of classical musical pieces in human volunteers, using scalp EEG recordings. We presented volunteers with continuous musical pieces composed of one or two instruments. In the latter case, the participants were asked to selectively attend to one of the two competing instruments and to perform a vibrato identification task. We used linear encoding and decoding models to relate the recorded EEG activity to the stimulus waveform. We show that we can measure neural responses to the temporal fine structure of melodic lines played by a single instrument, at the population level as well as for most individual participants. The neural response peaks at a latency of 7.6 msec and is not measurable past 15 msec. When analyzing the neural responses to the temporal fine structure elicited by competing instruments, we found no evidence of attentional modulation. We observed, however, that low-frequency neural activity exhibited a modulation consistent with the behavioral task at latencies from 100 to 160 msec, in a similar manner to the attentional modulation observed in continuous speech (N100). Our results show that, much like speech, the temporal fine structure of music is tracked by neural activity. In contrast to speech, however, this response appears unaffected by selective attention in the context of our experiment.
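The linear encoding (forward) modelling described above can be sketched as a ridge-regularised, time-lagged regression from the stimulus waveform to the recorded signal, often called a temporal response function (TRF). A minimal illustration on simulated data follows (not the authors' code; the kernel, lag range, and noise level are invented):

```python
import numpy as np

def lagged_design(stimulus, max_lag):
    """Time-lagged design matrix: column k holds the stimulus delayed by k samples."""
    n = len(stimulus)
    X = np.zeros((n, max_lag + 1))
    for k in range(max_lag + 1):
        X[k:, k] = stimulus[:n - k]
    return X

def fit_trf(stimulus, response, max_lag, ridge=1.0):
    """Ridge-regularised temporal response function (forward encoding model)."""
    X = lagged_design(stimulus, max_lag)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ response)

# Simulate: "EEG" = stimulus convolved with a known 4-sample kernel, plus noise.
rng = np.random.default_rng(1)
stim = rng.standard_normal(5000)
true_kernel = np.array([0.0, 1.0, 0.5, 0.2])
eeg = np.convolve(stim, true_kernel)[:5000] + 0.1 * rng.standard_normal(5000)

w = fit_trf(stim, eeg, max_lag=7)
print(np.round(w, 2))  # the first four weights recover the true kernel
```

A decoding (backward) model works the same way in reverse, predicting the stimulus from lagged copies of the multi-channel EEG; attention effects are then read off as differences in reconstruction accuracy or TRF amplitude between attended and ignored streams.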


1988 ◽  
Vol 32 (2) ◽  
pp. 168-172 ◽  
Author(s):  
Christopher D. Wickens ◽  
Kelly Harwood ◽  
Leon Segal ◽  
Inge Tkalcevic ◽  
Bill Sherman

The objective of this research was to establish the validity of predictive models of workload in the context of a controlled simulation of a helicopter flight mission. The models that were evaluated contain increasing levels of sophistication regarding their assumptions about the competition for processing resources underlying multiple-task performance. Ten subjects performed the simulation, which involved various combinations of a low-level flight task with three cognitive side tasks pertaining to navigation, spatial awareness, and computation. Side-task information was delivered auditorily or visually. Results indicated that subjective workload is best predicted by relatively simple models that integrate the total demands of tasks over time (r = 0.65). In contrast, performance is not well predicted by these models (r < .10), but is best predicted by models that assume differential competition between processing resources (r = 0.47). The relevance of these data to predictive models and to the use of subjective measures for model validation is discussed.
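The two classes of model contrasted above can be sketched in a few lines: a simple model sums task demands over time, while a resource-competition model additionally penalises pairs of concurrent tasks in proportion to how much their processing resources overlap. The demand values and conflict weights below are purely illustrative, not taken from the study:

```python
import numpy as np

def total_demand(demands):
    """Simplest workload model: sum each segment's task demands, ignoring overlap."""
    return demands.sum(axis=1)

def resource_conflict(demands, conflict):
    """Add a penalty for each concurrent task pair, weighted by how much
    their processing resources (e.g. auditory vs. visual input) overlap."""
    base = demands.sum(axis=1)
    n = demands.shape[1]
    penalty = sum(conflict[i, j] * demands[:, i] * demands[:, j]
                  for i in range(n) for j in range(i + 1, n))
    return base + penalty

# Hypothetical demand levels for 3 concurrent tasks across 5 mission segments,
# and a hypothetical symmetric resource-conflict matrix between the tasks.
demands = np.array([[1, 2, 0],
                    [2, 2, 1],
                    [3, 1, 1],
                    [1, 3, 2],
                    [2, 1, 3]], dtype=float)
conflict = np.array([[0.0, 0.8, 0.2],
                     [0.8, 0.0, 0.5],
                     [0.2, 0.5, 0.0]])

simple = total_demand(demands)
differential = resource_conflict(demands, conflict)
print(simple)
print(differential)
```

Validating such models then amounts to correlating each model's predicted workload per segment with observed ratings or performance, which is how the r values reported above were obtained.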


2019 ◽  
Vol 2019 (1) ◽  
Author(s):  
Erik L Meijs ◽  
Pim Mostert ◽  
Heleen A Slagter ◽  
Floris P de Lange ◽  
Simon van Gaal

Abstract Subjective experience can be influenced by top-down factors, such as expectations and stimulus relevance. Recently, it has been shown that expectations can enhance the likelihood that a stimulus is consciously reported, but the neural mechanisms supporting this enhancement are still unclear. We manipulated stimulus expectations within the attentional blink (AB) paradigm using letters and combined visual psychophysics with magnetoencephalographic (MEG) recordings to investigate whether prior expectations may enhance conscious access by sharpening stimulus-specific neural representations. We further explored how stimulus-specific neural activity patterns are affected by expectation, stimulus relevance, and conscious report. First, we show that valid expectations about the identity of an upcoming stimulus increase the likelihood that it is consciously reported. Second, using a series of multivariate decoding analyses, we show that the identity of letters presented in and out of the AB can be reliably decoded from MEG data. Third, we show that early sensory stimulus-specific neural representations are similar for reported and missed target letters in the AB task (active report required) and an oddball task in which the letter was clearly presented but its identity was task-irrelevant. However, later sustained and stable stimulus-specific representations were uniquely observed when target letters were consciously reported (decision-dependent signal). Fourth, we show that global pre-stimulus neural activity biased perceptual decisions toward a ‘seen’ response. Fifth and last, no evidence was obtained for the sharpening of sensory representations by top-down expectations. We discuss these findings in light of emerging models of perception and conscious report highlighting the role of expectations and stimulus relevance.
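The multivariate decoding approach mentioned above can be illustrated with a cross-validated classifier on simulated sensor data (a minimal sketch, not the authors' pipeline; the trial counts, sensor count, effect size, and the simple nearest-centroid rule are all assumptions for illustration):

```python
import numpy as np

def decode_cv(X, y, n_folds=5, seed=0):
    """Cross-validated nearest-centroid decoding of stimulus identity
    from multi-sensor activity patterns (one pattern per trial)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(y))
    folds = np.array_split(order, n_folds)
    correct = 0
    for test in folds:
        train = np.setdiff1d(order, test)
        # Class centroids estimated on training trials only.
        centroids = {c: X[train][y[train] == c].mean(axis=0)
                     for c in np.unique(y)}
        for i in test:
            pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
            correct += int(pred == y[i])
    return correct / len(y)

# Simulate 200 trials x 50 sensors: two letter classes, each carrying a
# small class-specific spatial pattern buried in sensor noise.
rng = np.random.default_rng(2)
pattern = rng.standard_normal(50)
y = rng.integers(0, 2, 200)
X = rng.standard_normal((200, 50)) + 0.5 * np.outer(2 * y - 1, pattern)

acc = decode_cv(X, y)
print(acc)  # well above the 0.5 chance level
```

Running the same decoder at successive time points relative to stimulus onset yields the time-resolved decoding traces used to distinguish early sensory representations from the later, report-dependent signals described above.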


Neurology ◽  
2012 ◽  
Vol 78 (Meeting Abstracts 1) ◽  
pp. PD7.009-PD7.009
Author(s):  
B. Alperin ◽  
A. Haring ◽  
T. Zhuravleva ◽  
P. Holcomb ◽  
D. Rentz ◽  
...  

2007 ◽  
Vol 19 (2) ◽  
pp. 315-330 ◽  
Author(s):  
Kurt E. Weaver ◽  
Alexander A. Stevens

Visual deprivation early in life results in occipital cortical responsiveness across a broad range of perceptual and cognitive tasks. In the reorganized occipital cortex of early blind (EB) individuals, the relative lack of specificity for particular sensory stimuli and tasks suggests that attention effects may play a prominent role in these areas. We wished to establish whether occipital cortical areas in the EB were responsive to stimuli across sensory modalities (auditory, tactile) and whether these areas maintained or altered their activity as a function of selective attention. Using a three-stimulus oddball paradigm and event-related functional magnetic resonance imaging, auditory and tactile tasks presented separately demonstrated that several occipital regions of interest (ROIs) in the EB, but not sighted controls (SCs), responded to targets and task-irrelevant distracter stimuli of both modalities. When auditory and tactile stimuli were presented simultaneously with subjects alternating attention between sensory streams, only the calcarine sulcus continued to respond to stimuli in both modalities. In all other ROIs, responses to auditory targets were as large or larger than those observed in the auditory-alone condition, but responses to tactile targets were attenuated or abolished by the presence of unattended auditory stimuli. Both auditory and somatosensory cortices responded consistently to auditory and tactile targets, respectively. These results reveal mechanisms of orienting and selective attention within the visual cortex of EB individuals and suggest that mechanisms of enhancement and suppression interact asymmetrically on auditory and tactile streams during bimodal sensory presentation.


2018 ◽  
Vol 183 ◽  
pp. 155-161
Author(s):  
Natania A. Crane ◽  
Stephanie M. Gorka ◽  
Katie L. Burkhouse ◽  
Kaveh Afshar ◽  
Justin E. Greenstein ◽  
...  

2018 ◽  
Author(s):  
Jens Kreitewolf ◽  
Malte Wöstmann ◽  
Sarah Tune ◽  
Michael Plöchl ◽  
Jonas Obleser

Abstract
When listening, familiarity with an attended talker’s voice improves speech comprehension. Here, we instead investigated the effect of familiarity with a distracting talker. In an irrelevant-speech task, we assessed listeners’ working memory for the serial order of spoken digits when a task-irrelevant, distracting sentence was produced by either a familiar or an unfamiliar talker (with rare omissions of the task-irrelevant sentence). We tested two groups of listeners using the same experimental procedure. The first group were undergraduate psychology students (N=66) who had attended an introductory statistics course. Critically, each student had been taught by one of two course instructors, whose voices served as the familiar and unfamiliar task-irrelevant talkers. The second group of listeners were family members and friends (N=20) who had known either one of the two talkers for more than ten years. Students, but not family members and friends, made more errors when the task-irrelevant talker was familiar versus unfamiliar. Interestingly, the effect of talker familiarity was not modulated by the presence of task-irrelevant speech: students experienced stronger working-memory disruption by a familiar talker irrespective of whether they heard a task-irrelevant sentence during memory retention or merely expected it. While previous work has shown that familiarity with an attended talker benefits speech comprehension, our findings indicate that familiarity with an ignored talker impairs working memory for target speech. The absence of this effect in family members and friends suggests that the degree of familiarity modulates memory disruption.

