Interplay between metrical and semantic processing in French: an N400 study

2019
Author(s): Noémie te Rietmolen, Radouane El Yagoubi, Corine Astésano

Abstract: French accentuation is held to belong to the level of the phrase. Consequently, French is considered 'a language without accent', with speakers who are 'deaf to stress'. Recent ERP studies investigating the French initial accent (IA), however, demonstrate that listeners not only discriminate between different stress patterns, but also expect words to be marked with IA early in the process of speech comprehension. Still, as words were presented in isolation, it remains unclear whether this preference applied to the lexical or to the phrasal level. In the current ERP study, we address this ambiguity and manipulate IA on words embedded in a sentence. Furthermore, we orthogonally manipulate semantic congruity to investigate the interplay between accentuation and later speech processing stages. Results reveal an early, fronto-centrally located negative deflection when words are presented without IA, indicating a general dispreference for words without IA. Additionally, we found an effect of semantic congruity in the centro-parietal region (the traditional region for the N400), which was larger for words without IA than for words with IA. Furthermore, we observed an interaction between metrical structure and semantic congruity, such that ±IA continued to modulate N400 amplitude fronto-centrally, but only in sentences that were semantically incongruent. The results indicate that presenting words without the initial accent hinders semantic conflict resolution. This interpretation is supported by the behavioral data, which show that participants were slower and made more errors when words had been presented without IA. As participants attended to the semantic content of the sentences, this finding underlines the automaticity of stress processing and indicates that IA may be encoded at a lexical level, where it facilitates semantic processing.

2020
Author(s): Michael P. Broderick, Nathaniel J. Zuk, Andrew J. Anderson, Edmund C. Lalor

Abstract: Speech comprehension relies on the ability to understand the meaning of words within a coherent context. Recent studies have attempted to obtain electrophysiological indices of this process by modelling how brain activity is affected by a word's semantic dissimilarity to the preceding words. While the resulting indices appear robust and are strongly modulated by attention, it remains possible that, rather than capturing the contextual understanding of words, they actually reflect word-to-word changes in semantic content that require no narrative-level understanding on the part of the listener. To test this possibility, we recorded EEG from subjects who listened to speech presented either in its original, narrative form or after the word order had been scrambled by varying amounts. This manipulation affected the subjects' ability to comprehend the narrative content of the speech, but not their ability to recognize the individual words. Neural indices of semantic understanding and of low-level acoustic processing were derived for each scrambling condition using the temporal response function (TRF) approach. Signatures of semantic processing were observed for conditions in which the speech was unscrambled or minimally scrambled and subjects were able to understand it. The same markers were absent at higher levels of scrambling, when speech comprehension dropped below chance. In contrast, word recognition remained high, and neural measures related to envelope tracking did not vary significantly across the scrambling conditions. This supports the previous claim that electrophysiological indices based on the semantic dissimilarity of words to their context reflect a listener's understanding of those words relative to that context. It also highlights the relative insensitivity of neural measures of low-level speech processing to speech comprehension.
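At its core, the TRF approach mentioned above is a regularized linear mapping from time-lagged stimulus features to the recorded EEG. The sketch below is a minimal illustration of that idea, not the authors' pipeline; the feature series, sampling rate, lag window, and ridge parameter are all assumptions chosen for demonstration.

```python
import numpy as np

def lagged_design(stim, lags):
    """Build a time-lagged design matrix from a 1-D stimulus feature.

    stim : (n_samples,) feature series (e.g. semantic dissimilarity
           impulses at word onsets, or the acoustic envelope)
    lags : iterable of integer sample lags (stimulus leading the EEG)
    """
    n = len(stim)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stim[:n - lag]
        else:
            X[:lag, j] = stim[-lag:]
    return X

def fit_trf(stim, eeg, lags, ridge=1.0):
    """Estimate a temporal response function by ridge regression.

    eeg : (n_samples, n_channels) recording
    Returns a (n_lags, n_channels) TRF.
    """
    X = lagged_design(stim, lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ eeg)

# Hypothetical data: 64-channel EEG at 128 Hz, lags from 0 to 600 ms.
fs = 128
rng = np.random.default_rng(0)
stim = rng.standard_normal(fs * 60)        # invented feature series
eeg = rng.standard_normal((fs * 60, 64))   # invented recording
lags = np.arange(0, int(0.6 * fs))         # 0-600 ms in samples
trf = fit_trf(stim, eeg, lags)             # (n_lags, 64)
```

In practice the ridge parameter is chosen by cross-validation, and the model's predictive accuracy on held-out data, rather than the TRF weights alone, serves as the neural index.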


2019
Author(s): Noémie te Rietmolen, Radouane El Yagoubi, Corine Astésano

Abstract: In French, accentuation is not lexically distinctive and is tightly intertwined with intonation. This has led to the language being described as 'a language without accent' and to French listeners being alleged to be 'deaf to stress'. However, if one considers Di Cristo's model, in which the metrical structure of speech plays a central role, it becomes possible to envision stress templates underlying the cognitive representation of words. This event-related potential (ERP) study examined whether French listeners are sensitive to the French primary final accent (FA) and secondary initial accent (IA), and whether both accents are part of the phonologically expected French stress pattern. Two oddball studies were carried out. In the first study, in one condition, deviants were presented without final accent (−FA) and standards with final accent (+FA), while in the other condition these positions were switched. We obtained asymmetric MMN waveforms, such that −FA deviants elicited a larger MMN than +FA deviants (which did not elicit an MMN), pointing toward a preference for stress patterns with FA. Additionally, the difference waveforms between identical stimuli in different positions within the oddball paradigms indicated that −FA stimuli were disfavored whether they served as deviants or as standards. In the second study, standards were always presented with both the initial and the final accent, while deviants were presented either without the final accent (−FA) or without the initial accent (−IA). Here, we obtained MMNs both to −FA and to −IA deviants, although −FA deviants elicited a larger MMN. Nevertheless, the results show that French listeners are deaf to neither the initial nor the final accent, pointing instead to an abstract phonological representation of both accents. In sum, the results argue against the notion of stress deafness in French and instead suggest that accentuation plays a more important role in French speech comprehension than is currently acknowledged.
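For readers unfamiliar with the oddball logic, the MMN is typically quantified as a deviant-minus-standard difference waveform, with amplitude measured in a post-stimulus latency window. Below is a minimal sketch, assuming baseline-corrected single-channel epochs and a conventional 100-250 ms window; both are generic assumptions, not values taken from the study.

```python
import numpy as np

def difference_wave(deviant_epochs, standard_epochs):
    """MMN difference wave: deviant ERP minus standard ERP.

    *_epochs : (n_trials, n_samples) single-channel epochs,
               baseline-corrected and time-locked to stimulus onset.
    """
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

def mean_amplitude(wave, fs, t_start, t_end):
    """Mean amplitude of a waveform in a latency window (seconds)."""
    i0, i1 = int(t_start * fs), int(t_end * fs)
    return wave[i0:i1].mean()

# Invented data: 200 standard and 40 deviant trials, 500 ms at 256 Hz.
fs = 256
rng = np.random.default_rng(1)
standards = rng.standard_normal((200, fs // 2))
deviants = rng.standard_normal((40, fs // 2))
mmn = difference_wave(deviants, standards)
print(mean_amplitude(mmn, fs, 0.10, 0.25))  # window is an assumption
```

The identity-based comparison described in the abstract goes one step further: the same physical stimulus is compared across roles (deviant in one block, standard in another), which removes acoustic confounds from the difference wave.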


2013, Vol. 27(4), pp. 149-164
Author(s): Montserrat Zurrón, Marta Ramos-Goicoa, Fernando Díaz

With the aim of establishing the temporal locus of the semantic conflict in the color-word Stroop and emotional Stroop phenomena, we analyzed the event-related potentials (ERPs) elicited by nonwords, by incongruent and congruent color words, by colored words with positive and negative emotional valence, and by colored words with neutral valence. The incongruent, positive, negative, and neutral stimuli produced interference in the behavioral response to the color of the stimuli. The P150/N170 amplitude was sensitive to the semantic equivalence of the two dimensions of the congruent color words. The P3b amplitude was smaller in response to incongruent color words and to positive, negative, and neutral colored words than in response to congruent color words and colored nonwords. There were no differences among the ERPs elicited by colored words with positive, negative, and neutral valence. Therefore, the P3b amplitude was sensitive to interference from the semantic content of the incongruent, positive, negative, and neutral words in the color-response task, independently of the emotional content of the colored words. In addition, the P3b amplitude was smaller in response to colored words with positive, negative, and neutral valence than in response to the incongruent color words. Overall, these data indicate that the temporal locus of the semantic conflict generated by incongruent color words (in the color-word Stroop task) and by colored words with positive, negative, and neutral valence (in the emotional Stroop task) lies in the range 300-450 ms post-stimulus.


2017, Vol. 29(7), pp. 1119-1131
Author(s): Katerina D. Kandylaki, Karen Henrich, Arne Nagels, Tilo Kircher, Ulrike Domahs, et al.

While listening to continuous speech, humans process beat information to correctly identify word boundaries. The beats of language are stress patterns created by combining lexical (word-specific) stress patterns with the rhythm of a specific language. Sometimes the lexical stress pattern must be altered to obey the rhythm of the language. This study investigated the interplay of lexical stress patterns and rhythmical well-formedness in natural speech with fMRI. Previous electrophysiological studies of cases in which a regular lexical stress pattern may be altered to obtain rhythmical well-formedness showed that even subtle rhythmic deviations are detected by the brain if attention is directed toward prosody. Here, we present a new approach to this phenomenon by having participants listen to contextually rich stories in the absence of a task targeting the manipulation. For the interaction of lexical stress and rhythmical well-formedness, we found one suprathreshold cluster localized between the cerebellum and the brainstem. For the main effect of lexical stress, we found higher BOLD responses to the retained lexical stress pattern in the bilateral SMA, bilateral postcentral gyrus, bilateral middle frontal gyrus, bilateral inferior and right superior parietal lobule, and right precuneus. These results support the view that lexical stress is processed as part of a sensorimotor network for speech comprehension. Moreover, our results connect beat processing in language to domain-independent timing perception.


2019, Vol. 30(3), pp. 942-951
Author(s): Lanfang Liu, Yuxuan Zhang, Qi Zhou, Douglas D. Garrett, Chunming Lu, et al.

Abstract: Whether auditory processing of speech relies on reference to the articulatory motor information of the speaker remains elusive. Here, we addressed this issue within a two-brain framework. Functional magnetic resonance imaging was used to record the brain activity of speakers telling real-life stories and, later, of listeners listening to audio recordings of those stories. Based on between-brain seed-to-voxel correlation analyses, we revealed that neural dynamics in the listeners' auditory temporal cortex are temporally coupled with the dynamics in the speaker's larynx/phonation area. Moreover, the coupling response in the listener's left auditory temporal cortex follows the hierarchical organization for speech processing, with response lags in A1+, STG/STS, and MTG increasing linearly. Further, listeners showing greater coupling responses understood the speech better. When comprehension failed, this interbrain auditory-articulatory coupling largely vanished. These findings suggest that a listener's auditory system and a speaker's articulatory system are inherently aligned during naturalistic verbal interaction, and that such alignment is associated with high-level information transfer from the speaker to the listener. Our study provides reliable evidence that reference to the articulatory motor information of the speaker facilitates speech comprehension in a naturalistic setting.
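The between-brain seed-to-voxel analysis described above amounts to correlating a speaker seed time course with listener voxel time courses at increasing listener lags, then asking where the coupling peaks. Here is a toy sketch under assumed dimensions (300 volumes, lags counted in TRs); the real analysis involves preprocessing, hemodynamic modelling, and group statistics omitted here.

```python
import numpy as np

def lagged_correlation(speaker_ts, listener_ts, max_lag):
    """Pearson correlation between a speaker seed time course and a
    listener voxel time course, for listener lags 0..max_lag volumes.
    A positive lag means the listener's response trails the speaker's.
    """
    n = len(speaker_ts)
    return np.array([
        np.corrcoef(speaker_ts[:n - lag], listener_ts[lag:])[0, 1]
        for lag in range(max_lag + 1)
    ])

# Toy dimensions: 300 volumes (e.g. TR = 2 s); signals invented here.
rng = np.random.default_rng(2)
speaker_seed = rng.standard_normal(300)    # e.g. larynx/phonation area
listener_voxel = rng.standard_normal(300)  # e.g. a voxel in STG/STS
r = lagged_correlation(speaker_seed, listener_voxel, max_lag=5)
peak_lag = int(np.argmax(r))  # lag of strongest coupling, in volumes
```

Repeating this over voxels yields a map of coupling strength and peak lag, which is how a linear increase in lag along A1+, STG/STS, and MTG could be observed.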


2015, Vol. 122(2), pp. 250-261
Author(s): Edward F. Chang, Kunal P. Raygor, Mitchel S. Berger

Classic models of language organization posited that separate motor and sensory language foci existed in the inferior frontal gyrus (Broca's area) and superior temporal gyrus (Wernicke's area), respectively, and that connections between these sites (arcuate fasciculus) allowed for auditory-motor interaction. These theories have predominated for more than a century, but advances in neuroimaging and stimulation mapping have provided a more detailed description of the functional neuroanatomy of language. New insights have shaped modern network-based models of speech processing composed of parallel and interconnected streams involving both cortical and subcortical areas. Recent models emphasize processing in “dorsal” and “ventral” pathways, mediating phonological and semantic processing, respectively. Phonological processing occurs along a dorsal pathway, from the posterosuperior temporal to the inferior frontal cortices. On the other hand, semantic information is carried in a ventral pathway that runs from the temporal pole to the basal occipitotemporal cortex, with anterior connections. Functional MRI has poor positive predictive value in determining critical language sites and should only be used as an adjunct for preoperative planning. Cortical and subcortical mapping should be used to define functional resection boundaries in eloquent areas and remains the clinical gold standard. In tracing the historical advancements in our understanding of speech processing, the authors hope to not only provide practicing neurosurgeons with additional information that will aid in surgical planning and prevent postoperative morbidity, but also underscore the fact that neurosurgeons are in a unique position to further advance our understanding of the anatomy and functional organization of language.


1993, Vol. 36(5), pp. 1083-1096
Author(s): Susan Jerger, Gayle Stout, Marilyn Kent, Elizabeth Albritton, Louise Loiselle, et al.

The accurate perception of speech involves the processing of multidimensional information. The aim of this study was to determine the influence of the semantic dimension on the processing of the auditory dimension of speech by children with hearing impairment. The processing interactions characterizing the semantic and auditory dimensions were assessed with a pediatric auditory Stroop task. The subjects, 20 children with hearing impairment and 60 children with normal hearing, were instructed to attend selectively to the voice-gender of speech targets while ignoring the semantic content. The type of target was manipulated to represent conflicting, neutral, and congruent relations between the dimensions (e.g., the male voice saying "Mommy," "ice cream," or "Daddy," respectively). The normal-hearing listeners could not ignore the irrelevant semantic content: reaction times were slower to the conflict targets (Stroop interference) and faster to the congruent targets (Stroop congruency). The subjects with hearing impairment showed prominent Stroop congruency but minimal Stroop interference. The reduced Stroop interference was not associated with chronological age, a speed-accuracy tradeoff, a non-neutral baseline, or relatively poorer discriminability of the word input. The present results suggest that the voice-gender and semantic dimensions of speech were not processed independently by these children, whether with or without hearing loss. However, the to-be-ignored semantic dimension exerted a less consistent influence on the processing of the voice-gender dimension in the presence of childhood hearing loss. The overall pattern of results suggests that speech processing by children with hearing impairment is carried out in a less stimulus-bound manner.
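The interference and congruency effects reported above are simple contrasts on mean reaction times. A small sketch, with per-trial RTs invented purely for illustration:

```python
import numpy as np

def stroop_effects(rt_conflict, rt_neutral, rt_congruent):
    """Stroop scores from reaction times (ms).

    interference: slowing on conflict trials relative to neutral
    congruency:   speeding on congruent trials relative to neutral
    """
    interference = np.mean(rt_conflict) - np.mean(rt_neutral)
    congruency = np.mean(rt_neutral) - np.mean(rt_congruent)
    return interference, congruency

# Invented per-trial RTs (ms) for one listener.
conflict = [812, 840, 795]   # e.g. male voice saying "Mommy"
neutral = [760, 772, 751]    # e.g. male voice saying "ice cream"
congruent = [731, 718, 740]  # e.g. male voice saying "Daddy"
print(stroop_effects(conflict, neutral, congruent))
```

On this scheme, the hearing-impaired group's profile corresponds to a near-zero interference score alongside a clearly positive congruency score.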


1977, Vol. 29(1), pp. 135-146
Author(s): D. W. Green

Two independent groups of subjects, under instructions orienting them either towards understanding or towards memorizing sentences, were timed as they responded to a brief auditory signal that occurred at some point during a sentence. Latency appeared to be primarily a function of the task, such that the deeper the semantic processing of the sentence, the longer the reaction time. On the basis of this and other aspects of the data, it is argued that such tasks affect the extent to which a subject retrieves the meanings of the words in a sentence and integrates them at its end. Concrete and abstract sentences were processed in fundamentally the same way. The conclusion drawn is that speech comprehension is an integrative process, under voluntary control, which collates different aspects of the speech signal.


2018
Author(s): Bohan Dai, James M. McQueen, René Terporten, Peter Hagoort, Anne Kösem

Abstract: Listening to speech is difficult in noisy environments, and is even harder when the interfering noise consists of intelligible speech rather than non-intelligible sounds. This suggests that the ignored speech is not fully ignored, and that competing linguistic information interferes with the neural processing of target speech. We tested this hypothesis using magnetoencephalography (MEG) while participants listened to target clear speech in the presence of distracting noise-vocoded signals. Crucially, the noise-vocoded distractors were initially unintelligible but were perceived as intelligible speech after a short training session. We compared participants' performance in the speech-in-noise task before and after training, as well as neural entrainment to both the target and the distracting speech. Comprehension of the target clear speech was reduced in the presence of intelligible distractors compared with unintelligible ones. Neural entrainment to target speech in the delta range (1-4 Hz) was reduced in strength in the presence of an intelligible distractor. In contrast, neural entrainment to the distracting signals was not significantly modulated by intelligibility. These results support and extend previous findings, showing, first, that the masking effects of distracting speech originate from the degradation of the linguistic representation of target speech, and second, that delta entrainment reflects linguistic processing of speech.

Significance Statement: Comprehension of speech in noisy environments is impaired by interference from background sounds. The magnitude of the interference depends on the intelligibility of the distracting speech signals. In a magnetoencephalography experiment with a highly controlled training paradigm, we show that the linguistic information of distracting speech imposes higher-order interference on the processing of target speech, as indexed by a decline in comprehension of the target speech and a reduction of delta entrainment to it. This work demonstrates the importance of neural oscillations for speech processing: delta oscillations reflect linguistic analysis during speech comprehension, which can be critically affected by the presence of other speech.
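Delta-range entrainment to speech can be roughly indexed by band-pass filtering both a sensor time course and the speech amplitude envelope to 1-4 Hz and correlating them. The sketch below is a crude correlation-based stand-in for the study's entrainment measure, not the authors' method; the sampling rates, filter order, and decimation-based downsampling are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def speech_envelope(audio, fs_audio, fs_out):
    """Broadband amplitude envelope via the Hilbert transform,
    downsampled to the sensor rate by simple decimation (crude:
    a proper pipeline would low-pass filter before decimating)."""
    env = np.abs(hilbert(audio))
    step = int(fs_audio // fs_out)
    return env[::step]

def delta_entrainment(meg, envelope, fs):
    """Correlation between a MEG sensor time course and the speech
    envelope after band-pass filtering both to the delta range."""
    b, a = butter(3, [1.0, 4.0], btype="bandpass", fs=fs)
    meg_d = filtfilt(b, a, meg)
    env_d = filtfilt(b, a, envelope)
    return np.corrcoef(meg_d, env_d)[0, 1]

# Invented data: 60 s of audio at 16 kHz, one MEG sensor at 200 Hz.
fs_meg, fs_audio = 200, 16000
rng = np.random.default_rng(3)
audio = rng.standard_normal(fs_audio * 60)
meg = rng.standard_normal(fs_meg * 60)
env = speech_envelope(audio, fs_audio, fs_meg)
print(delta_entrainment(meg, env[:len(meg)], fs_meg))
```

In a two-talker design like the one above, this index would be computed separately against the target and distractor envelopes, allowing the two entrainment responses to be compared across conditions.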

