A neural signature of automatic lexical access in bilinguals

2021
Author(s):
Sabrina Aristei
Aliette Lochy
Bruno Rossion
Christine Schiltz

Bilingualism is often associated with beneficial effects on cognitive control and top-down processes. The present study aimed at bypassing these processes to assess automatic visual word recognition in bilinguals. Using fast periodic visual stimulation, we recorded frequency-tagged word-selective EEG responses in French monolinguals and late bilinguals (German native, French as second language). Words were presented centrally within rapid (10 Hz) sequences of letter strings varying in word-likeness, i.e., consonant strings, non-words, pseudo-words, while participants performed an orthogonal task. Automatic word-selective brain responses in the occipito-temporal cortex arose almost exclusively for the languages mastered by participants: two in bilinguals vs. one in monolinguals. Importantly, the amplitudes of bilinguals' responses to words within consonant strings were unaffected by the native vs. late-learnt status of the language. Furthermore, for all and only the known languages, word-selective responses were reduced by embedding them in pseudo-words relative to non-words, both derived from the same language as the words. This word-likeness effect highlights the lexical nature of the recorded visual brain responses. A cross-language word-likeness effect was observed only in bilinguals and only with pseudo-words derived from the native language, indicating an experience-based tuning to language. Taken together, these findings indicate that the amount of exposure to a language determines the engagement of neural resources devoted to word processing in the occipito-temporal visual cortex. We conclude that automatic lexical coding occurs at early visual processing stages in bilinguals and monolinguals alike, and that language exposure determines the competition strength of a language.
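The frequency-tagging analysis used in studies like this one relies on a simple property: a periodic response is confined to a known bin of the EEG amplitude spectrum, where it can be quantified after subtracting the mean of neighbouring bins as a noise baseline. Below is a minimal Python sketch on synthetic data; the 2 Hz signal rate, recording length, and 10-bin baseline window are arbitrary demo values, not parameters taken from the study.

```python
import numpy as np

def tagged_amplitude(eeg, fs, f_tag, n_neighbors=10):
    """Baseline-corrected amplitude at a frequency-tagged bin.

    Computes the single-sided amplitude spectrum and subtracts the
    mean amplitude of neighbouring bins, a common noise correction
    in fast periodic visual stimulation analyses.
    """
    n = len(eeg)
    amp = np.abs(np.fft.rfft(eeg)) / n * 2          # single-sided amplitudes
    idx = int(round(f_tag * n / fs))                # bin of the tagged frequency
    lo, hi = max(idx - n_neighbors, 1), idx + n_neighbors + 1
    neighbors = np.r_[amp[lo:idx], amp[idx + 1:hi]]
    return amp[idx] - neighbors.mean()

# Toy demo: a 2 Hz periodic response buried in broadband noise.
fs, dur = 256, 20.0
t = np.arange(0, dur, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 2.0 * t) + rng.normal(0, 2.0, t.size)
print(tagged_amplitude(eeg, fs, 2.0))   # large: the tagged response stands out
print(tagged_amplitude(eeg, fs, 2.3))   # near zero: untagged control bin
```

In an actual paradigm the response is typically summed over the tagged frequency and its harmonics across occipito-temporal channels; the baseline correction above is what allows a word-selective response to be separated from the general visual response at the base stimulation rate.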

2017
Vol 114 (22)
pp. E4501-E4510
Author(s):
Job van den Hurk
Marc Van Baelen
Hans P. Op de Beeck

To what extent does functional brain organization rely on sensory input? Here, we show that for the penultimate visual-processing region, ventral-temporal cortex (VTC), visual experience is not the origin of its fundamental organizational property, category selectivity. In the fMRI study reported here, we presented 14 congenitally blind participants with face-, body-, scene-, and object-related natural sounds and presented 20 healthy controls with both auditory and visual stimuli from these categories. Using macroanatomical alignment, response mapping, and surface-based multivoxel pattern analysis, we demonstrated that VTC in blind individuals shows robust discriminatory responses elicited by the four categories and that these patterns of activity in blind subjects could successfully predict the visual categories in sighted controls. These findings were confirmed in a subset of blind participants born without eyes and thus deprived of all light perception since conception. The sounds also could be decoded in primary visual and primary auditory cortex, but these regions did not sustain generalization across modalities. Surprisingly, although not as strong as visual responses, selectivity for auditory stimulation in visual cortex was stronger in blind individuals than in controls. The opposite was observed in primary auditory cortex. Overall, we demonstrated a striking similarity in the cortical response layout of VTC in blind individuals and sighted controls, showing that the overall category-selective map in extrastriate cortex develops independently of visual experience.
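The cross-modal decoding logic of this study (fit a classifier on category-evoked response patterns from one modality and group, then test it on patterns from the other) can be sketched with synthetic data. The nearest-centroid correlation classifier below is a simplified stand-in for illustration, not the surface-based multivoxel pattern analysis pipeline the authors used:

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_trials = 50, 40
proto = rng.normal(0, 1, (4, n_vox))   # latent pattern for each of 4 categories

def simulate(noise_sd):
    """Trial-by-voxel response patterns for 4 categories in one 'modality'."""
    X = np.vstack([p + rng.normal(0, noise_sd, (n_trials, n_vox)) for p in proto])
    y = np.repeat(np.arange(4), n_trials)
    return X, y

X_train, y_train = simulate(2.0)   # e.g. sound-evoked patterns (blind group)
X_test, y_test = simulate(2.0)     # e.g. image-evoked patterns (sighted group)

# Nearest-centroid cross-decoding: estimate one centroid per category in the
# training modality, then label each test trial by its most correlated centroid.
centroids = np.vstack([X_train[y_train == c].mean(0) for c in range(4)])
pred = np.array([np.argmax([np.corrcoef(x, m)[0, 1] for m in centroids])
                 for x in X_test])
acc = (pred == y_test).mean()
print(f"cross-modal decoding accuracy: {acc:.2f} (chance = 0.25)")
```

Above-chance accuracy here only requires that the two "modalities" share a common category structure, which is precisely the property the study tested for VTC in blind and sighted participants.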


2017
Author(s):
Amra Covic
Christian Keitel
Emanuele Porcu
Erich Schröger
Matthias M Müller

Abstract The neural processing of a visual stimulus can be facilitated by attending to its position or by a co-occurring auditory tone. Using frequency-tagging, we investigated whether facilitation by spatial attention and audio-visual synchrony rely on similar neural processes. Participants attended to one of two flickering Gabor patches (14.17 and 17 Hz) located in opposite lower visual fields. Gabor patches further “pulsed” (i.e., showed smooth spatial frequency variations) at distinct rates (3.14 and 3.63 Hz). Frequency-modulating an auditory stimulus at the pulse rate of one of the visual stimuli established audio-visual synchrony. Flicker and pulsed stimulation elicited stimulus-locked rhythmic electrophysiological brain responses that allowed us to track the neural processing of simultaneously presented stimuli. These steady-state responses (SSRs) were quantified in the spectral domain to examine visual stimulus processing under conditions of synchronous vs. asynchronous tone presentation and when the respective stimulus positions were attended vs. unattended. Strikingly, unique patterns of effects on pulse- and flicker-driven SSRs indicated that spatial attention and audio-visual synchrony facilitated early visual processing in parallel and via different cortical processes. We found attention effects to resemble the classical top-down gain effect, facilitating both flicker- and pulse-driven SSRs. Audio-visual synchrony, in turn, only amplified synchrony-producing stimulus aspects (i.e., pulse-driven SSRs), possibly highlighting the role of temporally co-occurring sights and sounds in bottom-up multisensory integration.


Author(s):
Simen Hagen
Aliette Lochy
Corentin Jacques
Louis Maillard
Sophie Colnat-Coulbois
...

Abstract The extent to which faces and written words share neural circuitry in the human brain is actively debated. Here, we compare face-selective and word-selective responses in a large group of patients (N = 37) implanted with intracerebral electrodes in the ventral occipito-temporal cortex (VOTC). Both face-selective (i.e., significantly different responses to faces vs. non-face visual objects) and word-selective (i.e., significantly different responses to words vs. pseudofonts) neural activity is isolated with frequency-tagging. Critically, this sensitive approach allows us to objectively quantify category-selective neural responses and to disentangle them from general visual responses. About 70% of significant electrode contacts show either face-selectivity or word-selectivity only, with the expected right and left hemispheric dominance, respectively. Spatial dissociations are also found within core regions of face and word processing, with a medio-lateral dissociation in the fusiform gyrus (FG) and surrounding sulci. In the 30% of overlapping face- and word-selective contacts across the VOTC or in the FG and surrounding sulci, between-category selective amplitudes (faces vs. words) show no-to-weak correlations, despite strong correlations in both the within-category selective amplitudes (face–face, word–word) and the general visual responses to words and faces. Overall, these observations support the view that category-selective circuitry for faces and written words is largely dissociated in the human adult VOTC.


2020
Author(s):
Matthew J. Boring
Edward H. Silson
Michael J. Ward
R. Mark Richardson
Julie A. Fiez
...

Abstract The map of category-selectivity in human ventral temporal cortex (VTC) provides organizational constraints to models of object recognition. One important principle is lateral-medial response biases to stimuli that are typically viewed in the center or periphery of the visual field. However, little is known about the relative temporal dynamics and location of regions that respond preferentially to stimulus classes that are centrally viewed, like the face and word processing networks. Here, word- and face-selective regions within VTC were mapped using intracranial recordings from 36 patients. Partially overlapping, but also anatomically dissociable, patches of face and word selectivity were found in ventral temporal cortex. In addition to canonical word-selective regions along the left posterior occipitotemporal sulcus, selectivity was also located medial and anterior to face-selective regions on the fusiform gyrus, at the group level and within individual subjects. These regions were replicated using 7-Tesla fMRI in healthy subjects. Left hemisphere word-selective regions preceded right hemisphere responses by 125 ms, potentially reflecting the left hemisphere bias for language; no hemispheric difference was found in face-selective response latency. Word-selective regions along the posterior fusiform responded first, then selectivity spread medially and laterally, then anteriorly. Face-selective responses were first seen in posterior fusiform regions bilaterally, then proceeded anteriorly from there. For both words and faces, the relative delay between regions was longer than would be predicted by purely feedforward models of visual processing.
The distinct time-courses of responses across these regions, and between hemispheres, suggest that a complex and dynamic functional circuit supports face and word perception.
Significance Statement: Representations of visual objects in the human brain have been shown to be organized by several principles, including whether those objects tend to be viewed centrally or in the periphery of the visual field. However, it remains unclear how regions that process objects that are viewed centrally, like words and faces, are organized relative to one another. Here, direct neural recordings and 7T fMRI demonstrate that several intermingled regions in ventral temporal cortex participate in word and face processing. These regions display differences in their temporal dynamics and response characteristics, both within and between brain hemispheres, suggesting that they play different roles in perception. These results illuminate extended, bilateral, and dynamic brain pathways that support face perception and reading.


2020
Author(s):
Simen Hagen
Aliette Lochy
Corentin Jacques
Louis Maillard
Sophie Colnat-Coulbois
...

Abstract The extent to which faces and written words share neural circuitry in the human brain is actively debated. Here we compared face-selective and word-selective responses in a large group of patients (N = 37) implanted with intracerebral depth electrodes in the ventral occipito-temporal cortex (VOTC). Both face-selective (i.e., significantly different responses to faces vs. nonface visual objects) and word-selective (i.e., significantly different responses to words vs. pseudofonts) neural activity is isolated through frequency-tagging. Critically, this sensitive approach allows us to objectively quantify category-selective neural responses and to disentangle them from general visual responses. About 70% of significant contacts show either only face-selectivity or only word-selectivity, with the expected right and left hemispheric dominance, respectively. Spatial dissociations are also found within core regions of face and word processing, with a medio-lateral dissociation in the fusiform gyrus (FG) and surrounding sulci, while a postero-anterior dissociation is found in the inferior occipital gyrus (IOG). Only 30% of the significant contacts show both face- and word-selective responses. Critically, in these contacts, across the VOTC or in the FG and surrounding sulci, between-category selective amplitudes (faces vs. words) showed no-to-weak correlations, despite strong correlations in both the within-category selective amplitudes (face-face, word-word) and the general visual responses to words and faces. Overall, we conclude that category-selectivity for faces and written words is largely dissociated in the human VOTC.
Significance Statement: In modern human societies, faces and written words have become arguably the most significant stimuli of the visual environment.
Despite extensive research in neuropsychology, electroencephalography and neuroimaging over the past three decades, whether these two types of visual signals are recognized by similar or dissociated processes and neural networks remains unclear. Here we provide an original contribution to this outstanding scientific issue by directly comparing frequency-tagged face- and word-selective neural responses in a large group of epileptic patients implanted with intracerebral electrodes covering the ventral occipito-temporal cortex. While general visual responses to words and faces show significant overlap, the respective category-selective responses are neatly dissociated in spatial location and magnitude, pointing to largely dissociated processes and neural networks.


2018
Author(s):
Christian Keitel
Anne Keitel
Christopher SY Benwell
Christoph Daube
Gregor Thut
...

Two largely independent research lines use rhythmic sensory stimulation to study visual processing. Despite the use of strikingly similar experimental paradigms, they differ crucially in their notion of the stimulus-driven periodic brain responses: One regards them mostly as synchronised (entrained) intrinsic brain rhythms; the other assumes they are predominantly evoked responses (classically termed steady-state responses, or SSRs) that add to the ongoing brain activity. This conceptual difference can produce contradictory predictions about, and interpretations of, experimental outcomes. The effect of spatial attention on brain rhythms in the alpha-band (8-13 Hz) is one such instance: alpha-range SSRs have typically been found to increase in power when participants focus their spatial attention on laterally presented stimuli, in line with a gain control of the visual evoked response. In nearly identical experiments, retinotopic decreases in entrained alpha-band power have been reported, in line with the inhibitory function of intrinsic alpha. Here we reconcile these contradictory findings by showing that they result from a small but far-reaching difference between two common approaches to EEG spectral decomposition. In a new analysis of previously published EEG data, recorded during bilateral rhythmic visual stimulation, we find the typical SSR gain effect when emphasising stimulus-locked neural activity and the typical retinotopic alpha suppression when focusing on ongoing rhythms. These opposite but parallel effects suggest that spatial attention may bias the neural processing of dynamic visual stimulation via two complementary neural mechanisms.
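The crux of the reconciliation, that averaging trials before spectral decomposition emphasizes stimulus-locked (evoked) activity while computing spectra per trial and then averaging also retains non-phase-locked ongoing rhythms, can be demonstrated in a few lines of Python on synthetic data (all amplitudes, rates, and trial counts here are arbitrary demo values, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(2)
fs, dur, n_trials, f = 256, 2.0, 100, 10.0
t = np.arange(0, dur, 1.0 / fs)

def amp_at(x, f):
    """Single-sided spectral amplitude of a 1-D signal at frequency f."""
    a = np.abs(np.fft.rfft(x)) / len(x) * 2
    return a[int(round(f * len(x) / fs))]

# Each trial contains a small phase-locked 10 Hz evoked response plus a
# larger 10 Hz "intrinsic" oscillation whose phase is random per trial.
trials = np.array([0.2 * np.sin(2 * np.pi * f * t)                         # evoked
                   + 1.0 * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                   + rng.normal(0, 0.5, t.size)                            # noise
                   for _ in range(n_trials)])

evoked = amp_at(trials.mean(axis=0), f)              # average first: phase-locked only
total = np.mean([amp_at(tr, f) for tr in trials])    # per trial: includes ongoing alpha
print(f"evoked ~ {evoked:.2f}, total ~ {total:.2f}")
```

In this toy example the trial-average spectrum recovers roughly the 0.2 phase-locked amplitude (the random-phase oscillation cancels out), while the per-trial average is dominated by the 1.0 ongoing component; an attention effect could therefore appear to go in opposite directions depending on which of the two measures one computes.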


Author(s):
Elena Tribushinina
Mila Irmawati
Pim Mak

Abstract There is no agreement regarding the relationship between narrative abilities in the two languages of a bilingual child. In this paper, we test the hypothesis that such cross-language relationships depend on age and language exposure by studying the narrative skills of 32 Indonesian-Dutch bilinguals (mean age: 8;5, range: 5;0–11;9). The narratives were elicited by means of the Multilingual Assessment Instrument for Narratives (MAIN) and analysed for story structure, episodic complexity and use of internal state terms (ISTs) in the home language (Indonesian) and majority language (Dutch). The results demonstrate that story structure scores in the home language (but not in the majority language) were positively related to age. Exposure measures (current Dutch/Indonesian input, current richness of Dutch/Indonesian input, and length of exposure to Dutch) did not predict the macrostructure scores. There was a significant positive cross-language relationship in story structure and episodic complexity, and this relationship became stronger as a function of length of exposure to Dutch. There was also a positive cross-lingual relation in IST use, but it became weaker with age. The results support the idea that narrative skills are transferable between languages and suggest that cross-language relationships may interact with age and exposure factors in differential ways.


Author(s):
Geqi Qi
Jinglong Wu

The sensitivity of the left ventral occipito-temporal (vOT) cortex to visual word processing has triggered a considerable debate about the functional role of this region in reading. The debate rests largely on whether this particular region is specifically dedicated to reading and the extraction of an invariant visual word form. Many studies have been conducted to provide evidence for or against the functional specialization of this region. However, the emerging trend is that the different functional properties proposed by the two views are not in conflict with each other, but instead reflect different sides of the same phenomenon. Here, the authors focus on two questions: firstly, where do the two views conflict, and secondly, how do they fit with each other within a larger framework of functional organization in the object vision pathway? This review evaluates findings from both sides of the debate for a broader understanding of the functional role of the left vOT cortex.


1991
Vol 66 (2)
pp. 485-496
Author(s):
D. L. Robinson
J. W. McClurkin
C. Kertzman
S. E. Petersen

1. We recorded from single neurons in awake, trained rhesus monkeys in a lighted environment and compared responses to stimulus movement during periods of fixation with those to motion caused by saccadic or pursuit eye movements. Neurons in the inferior pulvinar (PI), lateral pulvinar (PL), and superior colliculus were tested. 2. Cells in PI and PL respond to stimulus movement over a wide range of speeds. Some of these cells do not respond to comparable stimulus motion, or discharge only weakly, when it is generated by saccadic or pursuit eye movements. Other neurons respond equivalently to both types of motion. Cells in the superficial layers of the superior colliculus have similar properties to those in PI and PL. 3. When tested in the dark to reduce visual stimulation from the background, cells in PI and PL still do not respond to motion generated by eye movements. Some of these cells have a suppression of activity after saccadic eye movements made in total darkness. These data suggest that an extraretinal signal suppresses responses to visual stimuli during eye movements. 4. The suppression of responses to stimuli during eye movements is not an absolute effect. Images brighter than 2.0 log units above background illumination evoke responses from cells in PI and PL. The suppression appears stronger in the superior colliculus than in PI and PL. 5. These experiments demonstrate that many cells in PI and PL have a suppression of their responses to stimuli that cross their receptive fields during eye movements. These cells are probably suppressed by an extraretinal signal. Comparable effects are present in the superficial layers of the superior colliculus. These properties in PI and PL may reflect the function of the ascending tectopulvinar system.


2014
Vol 112 (2)
pp. 353-361
Author(s):
Xiaodong Chen
Gregory C. DeAngelis
Dora E. Angelaki

The ventral intraparietal area (VIP) processes multisensory visual, vestibular, tactile, and auditory signals in diverse reference frames. We recently reported that visual heading signals in VIP are represented in an approximately eye-centered reference frame when measured using large-field optic flow stimuli. No VIP neuron was found to have head-centered visual heading tuning, and only a small proportion of cells had reference frames that were intermediate between eye- and head-centered. In contrast, previous studies using moving bar stimuli have reported that visual receptive fields (RFs) in VIP are head-centered for a substantial proportion of neurons. To examine whether these differences in previous findings might be due to the neuronal property examined (heading tuning vs. RF measurements) or the type of visual stimulus used (full-field optic flow vs. a single moving bar), we have quantitatively mapped visual RFs of VIP neurons using a large-field, multipatch, random-dot motion stimulus. By varying eye position relative to the head, we tested whether visual RFs in VIP are represented in head- or eye-centered reference frames. We found that the vast majority of VIP neurons have eye-centered RFs with only a single neuron classified as head-centered and a small minority classified as intermediate between eye- and head-centered. Our findings suggest that the spatial reference frames of visual responses in VIP may depend on the visual stimulation conditions used to measure RFs and might also be influenced by how attention is allocated during stimulus presentation.

