Teaching methods shape neural tuning to visual words in beginning readers

2018
Author(s): Alice van de Walle de Ghelcke, Bruno Rossion, Christine Schiltz, Aliette Lochy

Abstract: The impact of global vs. phonics teaching methods for reading on the emergence of left-hemisphere neural specialization for word recognition is unknown in children. We tested 42 first graders behaviorally and with electroencephalography, using Fast Periodic Visual Stimulation to measure selective neural responses to letter strings. Letter strings were inserted periodically (1/5) in pseudofonts within 40-s sequences displayed at 6 Hz, and were either words globally taught at school, eliciting visual whole-word form recognition (global method), or control words/pseudowords eliciting grapheme-phoneme mappings (phonics method). Selective responses (F/5, 1.2 Hz) were left-lateralized for control stimuli but bilateral for globally taught words, especially in poor readers. These results show that global-method instruction induces activation in the right hemisphere, which is involved in holistic processing and visual object recognition, rather than in the left hemisphere specialized for reading. Poor readers, given their difficulties in automatizing grapheme-phoneme mappings, rely mostly on this alternative, inadequate strategy.
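As an illustration of the frequency-tagging logic behind this paradigm, the sketch below (hypothetical code, not the authors'; the sampling rate, number of harmonics, and baseline-correction window are assumptions) shows how a word-selective response at F/5 = 1.2 Hz and its harmonics could be read out of the amplitude spectrum of a single EEG channel:

```python
import numpy as np

def fpvs_response(eeg, fs, base_freq=6.0, odd_freq=1.2, n_harmonics=5):
    """Sum baseline-corrected amplitude at the oddball frequency and its harmonics.

    eeg: 1-D voltage trace for one ~40-s stimulation sequence (one channel).
    fs:  sampling rate in Hz (assumed here, e.g. 512 Hz).
    Harmonics coinciding with the 6 Hz base stimulation rate are skipped,
    so the result reflects the word-selective (1.2 Hz) response only.
    """
    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)       # amplitude spectrum
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)

    total = 0.0
    for k in range(1, n_harmonics + 1):
        f = k * odd_freq
        r = f % base_freq
        if min(r, base_freq - r) < 1e-6:                 # skip 6 Hz, 12 Hz, ...
            continue
        idx = int(np.argmin(np.abs(freqs - f)))
        # baseline: mean of neighbouring bins, excluding the bins next to the target
        neighbours = np.r_[spectrum[idx - 10:idx - 1], spectrum[idx + 2:idx + 11]]
        total += spectrum[idx] - neighbours.mean()
    return total

# Illustrative call on simulated data: 40 s at 512 Hz with a weak 1.2 Hz component
fs = 512
t = np.arange(0, 40, 1.0 / fs)
fake_eeg = 0.3 * np.sin(2 * np.pi * 1.2 * t) + np.random.randn(t.size)
print(fpvs_response(fake_eeg, fs))
```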

Perception, 1978, Vol 7 (6), pp. 695-705
Author(s): Elizabeth K Warrington, Angela M Taylor

Visual object recognition was investigated in a group of eighty-one patients with right- or left-hemisphere lesions. Two tasks were used, one maximizing perceptual categorization by physical identity, the other maximizing semantic categorization by functional identity. The right-hemisphere group showed impairment on the perceptual categorization task, and the left-hemisphere group was impaired on the semantic categorization task. The findings are discussed in terms of categorical stages of object recognition, and a tentative model of their cerebral organization is suggested.


1980, Vol 51 (1), pp. 239-244
Author(s): Hitoshi Honda

Inhibitory effects of S1 on the RT to S2 in double (visual-visual) stimulation situations were examined using 10 right-handed subjects, especially from the viewpoint of hemispheric input/output coupling. It was shown that the RT of the left hemisphere (right hand) to S2 after the projection of S1 into the right hemisphere was slower than the RTs under other conditions. The results were interpreted as showing an asymmetrical interhemispheric interfering effect in situations of double stimulation.


2021, Vol 14
Author(s): Jennifer Randerath, Lisa Finkel, Cheryl Shigaki, Joe Burris, Ashish Nanda, ...

The ability to judge accurately whether or not an action can be accomplished successfully is critical for selecting appropriate response options that enable adaptive behaviors. Such affordance judgments are thought to rely on the perceived fit between environmental properties and knowledge of one's current physical capabilities. Little, however, is currently known about the ability of individuals to judge their own affordances following a stroke, or about the underlying neural mechanisms involved. To address these issues, we employed a signal detection approach to investigate the impact of left- or right-hemisphere injuries on judgments of whether a visual object was located within reach while remaining still (i.e., reachability). Regarding perceptual sensitivity and accuracy in judging reachability, there were no significant group differences between healthy controls (N = 29), right-brain-damaged (RBD, N = 17) and left-brain-damaged stroke patients (LBD, N = 17). However, while healthy controls and RBD patients demonstrated a negative response criterion and thus overestimated their reach capability, LBD patients' average response criterion converged to zero, indicating no judgment tendency. Critically, the LBD group's judgment-tendency pattern is consistent with previous findings in this same sample on an affordance judgment task that required estimating whether the hand can fit through apertures (Randerath et al., 2018). Lesion analysis suggests that this loss of judgment tendency may be associated with damage to the left insula, the left parietal lobe, and the left middle temporal lobe. Based on these results, we propose that damage to the left ventro-dorsal stream disrupts the retrieval and processing of a stable criterion, leading to stronger reliance on intact online body-perceptive processes computed within the preserved bilateral dorsal network.
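For readers unfamiliar with the signal detection quantities mentioned here, the sketch below (illustrative counts and a standard log-linear correction; not the authors' analysis code) shows how sensitivity d' and the response criterion c are computed from hit and false-alarm counts. A negative c corresponds to the overestimation bias described above, and c near zero to no judgment tendency:

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and response criterion (c).

    A negative c corresponds to a liberal bias (here: overestimating reach);
    c near zero indicates no judgment tendency.
    """
    # Log-linear correction avoids infinite z-scores when a rate is 0 or 1
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)

    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion

# Illustrative counts from "reachable" vs. "unreachable" trials
print(sdt_measures(hits=40, misses=10, false_alarms=20, correct_rejections=30))
```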


Author(s): Tejas Rana

Various methods can be used for face recognition and detection; here, two main experiments are reported: the first evaluates the impact of facial landmark localization on face recognition performance, and the second evaluates the impact of extracting HOG features from a regular grid and at multiple scales. We examine the question of feature sets for robust visual object recognition. Histograms of Oriented Gradients (HOG) outperform existing edge- and gradient-based descriptors. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, relatively coarse spatial binning, fine orientation binning, and high-quality local contrast normalization in overlapping descriptor patches are all important for good results. Comparative experiments show that although HOG is a simple feature descriptor, the proposed HOG feature achieves good results with much lower computational time.
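A minimal sketch of HOG extraction over a regular grid, using scikit-image's hog function as one common off-the-shelf implementation; the parameter values below are illustrative defaults, not necessarily those used in this work:

```python
from skimage import data, color
from skimage.feature import hog

# Example image: grayscale version of a bundled sample photo
image = color.rgb2gray(data.astronaut())

# Histogram of Oriented Gradients: fine orientation binning (9 bins),
# 8x8-pixel cells, 2x2-cell blocks with local contrast normalization
features, hog_image = hog(
    image,
    orientations=9,
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
    block_norm="L2-Hys",
    visualize=True,
)

print(features.shape)  # one long descriptor vector covering the whole grid
```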


2018, Vol 30 (3), pp. 393-410
Author(s): Genevieve Quek, Dan Nemrodov, Bruno Rossion, Joan Liu-Shuang

In daily life, efficient perceptual categorization of faces occurs in dynamic and highly complex visual environments. Yet the role of selective attention in guiding face categorization has predominantly been studied under sparse and static viewing conditions, with little focus on disentangling the impact of attentional enhancement and suppression. Here we show that attentional enhancement and suppression exert a differential impact on face categorization supported by the left and right hemispheres. We recorded 128-channel EEG while participants viewed a 6-Hz stream of object images (buildings, animals, objects, etc.) with a face image embedded as every fifth image (i.e., OOOOFOOOOFOOOOF…). We isolated face-selective activity by measuring the response at the face presentation frequency (i.e., 6 Hz/5 = 1.2 Hz) under three conditions: Attend Faces, in which participants monitored the sequence for instances of female faces; Attend Objects, in which they responded to instances of guitars; and Baseline, in which they performed an orthogonal task on the central fixation cross. During the orthogonal task, face-specific activity was predominantly centered over the right occipitotemporal region. Actively attending to faces enhanced face-selective activity much more evidently in the left hemisphere than in the right, whereas attending to objects suppressed the face-selective response in both hemispheres to a comparable extent. In addition, the time courses of attentional enhancement and suppression did not overlap. These results suggest that the left and right hemispheres support face-selective processing in distinct ways: the right hemisphere is mandatorily engaged by faces, and the left hemisphere is more flexibly recruited to serve current task demands.
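Once the face-selective amplitude at 1.2 Hz (and its harmonics) has been quantified per condition and hemisphere, attentional enhancement and suppression can be expressed as simple differences from baseline. The sketch below uses made-up amplitude values purely to illustrate that computation, not data from the study:

```python
# Illustrative face-selective amplitudes (µV) summed over 1.2 Hz harmonics,
# per hemisphere and condition; the numbers are invented for demonstration.
amps = {
    "left":  {"baseline": 0.8, "attend_faces": 1.6, "attend_objects": 0.5},
    "right": {"baseline": 2.0, "attend_faces": 2.3, "attend_objects": 1.4},
}

for hemi, a in amps.items():
    enhancement = a["attend_faces"] - a["baseline"]    # gain from attending to faces
    suppression = a["baseline"] - a["attend_objects"]  # loss from attending to objects
    print(f"{hemi}: enhancement {enhancement:+.2f} µV, suppression {suppression:+.2f} µV")
```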


2004, Vol 16 (7), pp. 1250-1261
Author(s): Takahiro Sekiguchi, Sachiko Koyama, Ryusuke Kakigi

Neuroimaging studies have reported that the left superior temporal cortical area is activated by visually presented words. In the present study, we recorded cortical magnetic responses evoked by visual words and examined the effect of phonological repetition (e.g., hair–hare) on left superior temporal cortical activity, using pairs of homophonic Japanese words as stimuli. Unlike English, Japanese has a large number of homophone pairs with totally different orthography. By taking advantage of this feature of the Japanese writing system, the effect of phonological repetition can be examined in isolation, without being confounded by the effect of orthographic similarity. Magnetic responses were recorded over the bilateral temporal sites of the brain while subjects silently read words. The words were presented one by one; a quarter of them were immediately followed by a homophonic word. Clear magnetic responses in the latency range of 300–600 msec were observed in the left hemisphere, and the responses to the homophones were smaller than those to the first-presented words. In the right hemisphere, clear responses were not consistently recorded in the same latency range, and no effect of phonological repetition was observed. The sources of the responses recorded over the left hemisphere were estimated to be in the left superior temporal cortical area adjacent to the auditory cortex, and the source strength, like the magnetic responses, was reduced by phonological repetition. This result suggests that activity in the left superior temporal cortical area is associated with access to the phonological representation of words.


1995, Vol 7 (4), pp. 457-478
Author(s): Argye E. Hillis, Alfonso Caramazza

We report detailed analyses of the performance of a patient, DHY, who, as a consequence of strokes in the left occipital lobe and the periventricular white matter in the region of the splenium, showed severely impaired naming of visual stimuli despite spared recognition of visual stimuli and spared naming in other modalities. This pattern of performance, labeled "optic aphasia", has previously been interpreted as support for the hypothesis that there are independent semantic systems, either a visual and a verbal semantic store (Beauvois, 1982; Lhermitte & Beauvois, 1973) or a right-hemisphere and a left-hemisphere semantic system (Coslett & Saffran, 1989, 1992), which are "disconnected" in these patients. We provide evidence that DHY shows precisely the types of performance across a variety of verbal and visual tasks that have been used to support these claims of separate semantic systems: (1) good performance in naming to definition and naming objects presented for tactile exploration (which has been interpreted as evidence of spared verbal or left-hemisphere semantic processing), and (2) good performance on various "semantic" tasks that do not require naming (which has been interpreted as access to spared visual or right-hemisphere semantic processing). Nevertheless, when nonverbal semantic tasks were modified such that they required access to more detailed semantic information for accurate performance, DHY was far less accurate, indicating that she did not access complete semantic information about objects in the visual modality. We argue that these data undermine the claim that cases of optic aphasia can be explained only by proposing multiple semantic systems. We propose an alternative account for this pattern of performance, within a model of visual object naming that specifies a single, modality-independent semantic system. We show that the performance of DHY and other "optic aphasic" patients can be explained by proposing a deficit in accessing a complete, modality-independent, lexical-semantic representation from an intact stored structural description of the object. We discuss the implications of these conclusions for claims about the neuroanatomical correlates of semantic and visual object processing.


Perception, 1986, Vol 15 (3), pp. 355-366
Author(s): Elizabeth K Warrington, Merle James

An investigation is reported of the ability of normal subjects and patients with right-hemisphere lesions to identify 3-D shadow images of common objects from different viewpoints. Object recognition thresholds were measured in terms of angle of rotation (through the horizontal or vertical axis) required for correct identification. Effects of axial rotation were very variable and no evidence was found of a typical recognition threshold function relating angle of view to object identification. Although the right-hemisphere-lesion group was consistently and significantly worse than the control group, no qualitative differences between the groups were observed. The findings are discussed in relation to Marr's theory that the geometry of a 3-D shape is derived from axial information, and it is argued that the data reported are more consistent with a distinctive-features model of object recognition.


2013, Vol 27 (3), pp. 142-148
Author(s): Konstantinos Trochidis, Emmanuel Bigand

The combined interactions of mode and tempo on emotional responses to music were investigated using both self-reports and electroencephalogram (EEG) activity. A musical excerpt was performed in three different modes and tempi. Participants rated the emotional content of the resulting nine stimuli while their EEG activity was recorded. Musical mode influenced the valence of emotion, with the major mode evaluated as happier and more serene than the minor and Locrian modes. In frontal EEG activity, the major mode was associated with increased alpha activation in the left hemisphere compared with the minor and Locrian modes, which in turn induced increased activation in the right hemisphere. Tempo modulated the arousal value of emotion, with faster tempi associated with stronger feelings of happiness and anger; in the EEG, this effect was associated with increased frontal activation in the left hemisphere. By contrast, slow tempi induced decreased frontal activation in the left hemisphere. Some interactive effects were found between mode and tempo: an increase in tempo modulated emotion differently depending on the mode of the piece.


2007
Author(s): K. Suzanne Scherf, Marlene Behrmann, Kate Humphreys, Beatriz Luna
