Is human face recognition lateralized to the right hemisphere due to neural competition with left-lateralized visual word recognition? A critical review

Author(s):  
Bruno Rossion ◽  
Aliette Lochy
2019 ◽  
Vol 25 (2) ◽  
pp. 214-233
Author(s):  
Filiz Mergen ◽  
Gulmira Kuruoglu

Recent interdisciplinary research has considerably expanded our knowledge of the relationship between language and the brain, and numerous aspects of language have been the subject of study. Visual word recognition is a temporal process that starts with recognizing the physical features of words and matching them against potential candidates in the mental lexicon. Word frequency plays a significant role in this process; other factors include similarities in spelling and pronunciation, and whether the letter strings are meaningful words or not. The emotional load of words also deserves closer inspection, as an overwhelming amount of evidence supports the privileged status of emotion in both verbal and nonverbal tasks. It is well established that lexical processing involves the brain hemispheres to varying degrees, with the left hemisphere showing greater involvement in verbal tasks than the right, and that the emotional load of verbal stimuli modulates the hemispheres' specialized roles in lexical processing. Despite the abundance of research on the processing of words in languages from a variety of language families, studies investigating Turkish, a language of Uralic-Altaic origin, are scarce. This study aims to fill that gap by reporting evidence on how Turkish words with and without emotional load are processed and represented in the brain. We employed a visual hemifield paradigm with a lexical decision task: participants decided whether letter strings presented to either the right or the left of the center of the computer screen were real words or non-words, and their response times and accuracy were recorded. We obtained shorter response times and higher accuracy for real words than for non-words, as reported in the majority of studies in the literature. We also found that emotional load modulated word recognition, in line with previous results. Finally, our results support the view of left-hemisphere superiority in lexical processing in monolingual speakers.
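The hemifield lexical decision analysis described above (mean response time on correct trials, plus accuracy, split by hemifield and lexicality) can be sketched as follows. The trial data here are entirely hypothetical, not the study's:

```python
# Sketch: summarizing a visual hemifield lexical decision task.
# Trials are (hemifield, lexicality, rt_ms, correct) tuples -- made-up data.
from collections import defaultdict

trials = [
    ("right", "word", 520, True), ("right", "word", 560, True),
    ("left",  "word", 610, True), ("left",  "word", 640, False),
    ("right", "nonword", 700, True), ("left", "nonword", 730, True),
]

def summarize(trials):
    groups = defaultdict(list)
    for field, lex, rt, correct in trials:
        groups[(field, lex)].append((rt, correct))
    out = {}
    for key, obs in groups.items():
        correct_rts = [rt for rt, ok in obs if ok]  # RT from correct trials only
        out[key] = {
            "mean_rt": sum(correct_rts) / len(correct_rts) if correct_rts else None,
            "accuracy": sum(ok for _, ok in obs) / len(obs),
        }
    return out

stats = summarize(trials)
```

With real data, the word/non-word and left/right cell means would feed the reported comparisons.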


2003 ◽  
Vol 15 (3) ◽  
pp. 354-363 ◽  
Author(s):  
Michal Lavidor ◽  
Vincent Walsh

The split-fovea theory proposes that visual word recognition is mediated by the splitting of the foveal image, with letters to the left of fixation projected to the right hemisphere (RH) and letters to the right of fixation projected to the left hemisphere (LH). We applied repetitive transcranial magnetic stimulation (rTMS) over the left and right occipital cortex during a lexical decision task to investigate the extent to which word recognition processes could be accounted for according to the split-fovea theory. Unilateral rTMS significantly impaired lexical decision latencies to centrally presented words, supporting the suggestion that foveal representation of words is split between the cerebral hemispheres rather than bilateral. Behaviorally, we showed that words that have many orthographic neighbors sharing the same initial letters (“lead neighbors”) facilitated lexical decision more than words with few lead neighbors. This effect did not apply to end neighbors (orthographic neighbors sharing the same final letters). Crucially, rTMS over the RH impaired lead-, but not end-neighborhood facilitation. The results support the split-fovea theory, where the RH has primacy in representing lead neighbors of a written word.
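One simple way to operationalize the lead/end neighbor distinction used above is to find all one-letter-substitution neighbors and classify each by where the substitution falls. This is a sketch with a toy lexicon; the word lists and the strict "only the final/initial letter differs" criterion are illustrative assumptions, not the study's stimuli:

```python
# Sketch: orthographic neighbors are same-length words differing by one letter.
# A neighbor differing only in the final letter shares the initial letters
# ("lead" neighbor); one differing only in the initial letter shares the
# final letters ("end" neighbor).

def neighbors(word, lexicon):
    return [w for w in lexicon
            if len(w) == len(word) and w != word
            and sum(a != b for a, b in zip(w, word)) == 1]

def split_neighbors(word, lexicon):
    lead, end = [], []
    for n in neighbors(word, lexicon):
        # position of the single differing letter
        pos = next(i for i, (a, b) in enumerate(zip(word, n)) if a != b)
        if pos == len(word) - 1:
            lead.append(n)   # shares the initial letters
        elif pos == 0:
            end.append(n)    # shares the final letters
    return lead, end

lexicon = {"cave", "care", "core", "dare", "cart"}
lead, end = split_neighbors("care", lexicon)   # toy example
```

Neighbors differing in a middle letter fall into neither class under this operationalization.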


2008 ◽  
Vol 19 (10) ◽  
pp. 998-1006 ◽  
Author(s):  
Janet Hui-wen Hsiao ◽  
Garrison Cottrell

It is well known that there exist preferred landing positions for eye fixations in visual word recognition. However, the existence of preferred landing positions in face recognition is less well established. It is also unknown how many fixations are required to recognize a face. To investigate these questions, we recorded eye movements during face recognition. During an otherwise standard face-recognition task, subjects were allowed a variable number of fixations before the stimulus was masked. We found that optimal recognition performance is achieved with two fixations; performance does not improve with additional fixations. The distribution of the first fixation is just to the left of the center of the nose, and that of the second fixation is around the center of the nose. Thus, these appear to be the preferred landing positions for face recognition. Furthermore, the fixations made during face learning differ in location from those made during face recognition and are also more variable in duration; this suggests that different strategies are used for face learning and face recognition.


2020 ◽  
Author(s):  
Xiaodong Liu ◽  
Luc Vermeylen ◽  
David Wisniewski ◽  
Marc Brysbaert

Lateralization is a critical characteristic of language production and also plays a role in visual word recognition. However, the neural mechanisms underlying the interactions between visual input and spoken word representations are still unclear. We investigated the contribution of sub-lexical phonological information in visual word processing by exploiting the fact that Chinese characters can contain phonetic radicals in either the left or right half of the character. fMRI data were collected while 39 Chinese participants read words while searching for target color words. On the basis of whole-brain analysis and three laterality analyses of regions of interest, we argue that visual information from centrally presented Chinese characters is split in the fovea and projected to the contralateral visual cortex, from which phonological information can be extracted rapidly if the character contains a phonetic radical. Extra activation, suggestive of more effortful processing, is observed when the phonetic radical is situated in the left half of the character and therefore initially sent to the visual cortex in the right hemisphere, which is less specialized for language processing. Our results are in line with the proposal that phonological information helps written word processing by means of top-down feedback.
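Laterality analyses of ROI activations commonly use the standard laterality index LI = (L − R) / (L + R), which ranges from +1 (fully left-lateralized) to −1 (fully right-lateralized). Whether this exact index matches the paper's three laterality analyses is an assumption; the activation values below are made up:

```python
# Sketch: the conventional fMRI laterality index over homologous ROIs.
# LI > 0 means left-dominant activation; LI < 0 means right-dominant.

def laterality_index(left, right):
    if left + right == 0:
        return 0.0          # no activation on either side: undefined, report 0
    return (left - right) / (left + right)

# hypothetical summed ROI activation (arbitrary units) for a reading task
li = laterality_index(left=8.4, right=3.6)
```

In practice, the L and R terms are typically voxel counts above threshold or summed beta/t values within mirrored regions of interest.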


2001 ◽  
Vol 55 (1) ◽  
pp. 13-26 ◽  
Author(s):  
J.W. Peirce ◽  
A.E. Leigh ◽  
A.P.C. daCosta ◽  
K.M. Kendrick

Author(s):  
Manuel Perea ◽  
Victoria Panadero

The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word’s overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords like viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) to consistent-shape and inconsistent-shape pseudowords for both adult skilled readers and normally reading children – this is consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors to viotín-like pseudowords than to viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word’s visual cues, presumably because of poor letter representations.


Author(s):  
Diane Pecher ◽  
Inge Boot ◽  
Saskia van Dantzig ◽  
Carol J. Madden ◽  
David E. Huber ◽  
...  

Previous studies (e.g., Pecher, Zeelenberg, & Wagenmakers, 2005) found that semantic classification performance is better for target words with orthographic neighbors that are mostly from the same semantic class (e.g., living) compared to target words with orthographic neighbors that are mostly from the opposite semantic class (e.g., nonliving). In the present study we investigated the contribution of phonology to orthographic neighborhood effects by comparing effects of phonologically congruent orthographic neighbors (book-hook) to phonologically incongruent orthographic neighbors (sand-wand). The prior presentation of a semantically congruent word produced larger effects on subsequent animacy decisions when the previously presented word was a phonologically congruent neighbor than when it was a phonologically incongruent neighbor. In a second experiment, performance differences between target words with versus without semantically congruent orthographic neighbors were larger if the orthographic neighbors were also phonologically congruent. These results support models of visual word recognition that assume an important role for phonology in cascaded access to meaning.
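The phonological congruence contrast above (book-hook vs. sand-wand) can be sketched by checking whether an orthographic neighbor's pronunciation also differs in just one segment. The mini pronunciation dictionary is a crude toy transcription for illustration, not data from the study:

```python
# Sketch: splitting orthographic neighbors into phonologically congruent
# (book-hook) and incongruent (sand-wand) sets, using toy transcriptions.
PRON = {
    "book": "bʊk", "hook": "hʊk", "look": "lʊk",
    "sand": "sænd", "wand": "wɒnd", "band": "bænd",
}

def neighbors(word, lexicon):
    return [w for w in lexicon
            if len(w) == len(word) and w != word
            and sum(a != b for a, b in zip(w, word)) == 1]

def split_by_phonology(word, lexicon=PRON):
    congruent, incongruent = [], []
    for n in neighbors(word, lexicon):
        p1, p2 = PRON[word], PRON[n]
        # congruent if the pronunciations also differ in exactly one segment
        one_segment = (len(p1) == len(p2) and
                       sum(a != b for a, b in zip(p1, p2)) == 1)
        (congruent if one_segment else incongruent).append(n)
    return congruent, incongruent
```

For "sand", the neighbor band is phonologically congruent (only the onset changes) while wand is incongruent (onset and vowel both change).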


2012 ◽  
Author(s):  
Nicola Molinaro ◽  
Mikel Lizarazu ◽  
Jon Andoni Dunabeitia ◽  
Manuel Carreiras
