Deafness and Attentional Visual Search: A Developmental Study

Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 252-252
Author(s):  
C Marendaz ◽  
C Robert ◽  
F Bonthoux

Neurophysiological (epigenetic specialisation of cortical areas) as well as behavioural (sign language, visual control of spatial surroundings) constraints suggest that deaf people should develop heightened abilities in processing parafoveal/peripheral visual information. Electrophysiological (visual event-related potentials) and psychophysical research using visual detection tasks with congenitally deaf adults corroborates this viewpoint (Neville, 1994 The Cognitive Neurosciences 219 – 231). The aim of this study was to examine whether this ability persists when the visual detection task requires a spatiotemporal organisation of attention. Forty congenitally, bilaterally deaf subjects (from a specialised institution) and sixty-four hearing subjects, subdivided into five age groups (from 7 years of age to young adulthood), performed four visual search tasks. The results showed that the younger deaf children performed dramatically worse than the age-matched hearing children. This difference in performance between deaf and hearing children, however, disappeared by the age of 11 years. Deaf adults did not perform significantly better than hearing adults. The data obtained in children were replicated in a longitudinal study (retest two years later). We are currently trying to determine which attentional mechanisms are most impaired in young deaf children (spatiotemporal organisation of search, engagement/disengagement of attention, etc) and what underlies the apparent amelioration of their deficit during development.

Perception ◽  
10.1068/p5036 ◽  
2003 ◽  
Vol 32 (4) ◽  
pp. 485-497 ◽  
Author(s):  
J Bernard Netelenbos ◽  
Geert J P Savelsbergh

The localisation time of visual targets within and beyond the field of view, and the relative timing of the onsets of eye and head movements, were examined in deaf and hearing children of two age groups: 5 – 7 years and 10 – 12 years. Compared to their hearing peers, the deaf children more often showed a mode of eye – head coordination in which the head leads the eye. The discrepancy between the onsets of eye and head movements was greater for the younger than for the older groups. Furthermore, the deaf children took more time than the hearing children to localise the targets; the young deaf children in particular differed from their hearing contemporaries. These findings support the view that the differences in visual search between deaf and hearing children decrease during development. The results are discussed in the context of a distinction between representational and sensorimotor control of eye – head responses.


2020 ◽  
Author(s):  
Alexandra Begau ◽  
Laura-Isabelle Klatt ◽  
Edmund Wascher ◽  
Daniel Schneider ◽  
Stephan Getzmann

In natural conversations, visible mouth and lip movements play an important role in speech comprehension. There is evidence that visual speech information improves speech comprehension, especially for older adults and under difficult listening conditions. However, the neurocognitive basis is still poorly understood. The present EEG experiment investigated the benefits of audiovisual speech in a dynamic cocktail-party scenario with 22 younger (aged 20 to 34 years) and 20 older (aged 55 to 74 years) participants. We presented three simultaneously talking faces with a varying amount of visual speech input (still faces, visually unspecific, or audiovisually congruent). In a two-alternative forced-choice task, participants had to discriminate target words (“yes” or “no”) among two distractors (one-digit number words). In half of the experimental blocks, the target was always presented from a central position; in the other half, occasional switches to a lateral position could occur. We investigated behavioral and electrophysiological modulations due to age, location switches, and the content of visual information, analyzing response times and accuracy as well as the P1, N1, P2, and N2 event-related potentials (ERPs) and the contingent negative variation (CNV) in the EEG. We found that audiovisually congruent speech information improved performance and modulated ERP amplitudes in both age groups, suggesting enhanced preparation for and integration of the subsequent auditory input. However, these benefits were only observed as long as no location switches occurred. To conclude, meaningful visual information in a multi-talker setting, when presented from the expected location, is beneficial for both younger and older adults.


2009 ◽  
Vol 23 (2) ◽  
pp. 63-76 ◽  
Author(s):  
Silke Paulmann ◽  
Sarah Jessen ◽  
Sonja A. Kotz

The multimodal nature of human communication is well established. Yet few empirical studies have systematically examined the widely held belief that multimodal perception is facilitated in comparison to unimodal or bimodal perception. In the current experiment we first explored the processing of unimodally presented facial expressions. Auditory (prosodic and/or lexical-semantic) information was then presented together with the visual information to investigate the processing of bimodal (facial and prosodic cues) and multimodal (facial, lexical, and prosodic cues) human communication. Participants engaged in an identity identification task while event-related potentials (ERPs) were recorded to examine early processing mechanisms as reflected in the P200 and N300 components. While the former component has repeatedly been linked to the processing of physical stimulus properties, the latter has been linked to more evaluative, “meaning-related” processing. A direct relationship between P200 and N300 amplitude and the number of information channels present was found: the multimodal condition elicited the smallest amplitude in both components, the bimodal condition an intermediate amplitude, and the unimodal condition the largest. These data suggest that multimodal information induces clear facilitation in comparison to unimodal or bimodal information. The advantage of multimodal perception as reflected in the P200 and N300 components may thus reflect one of the mechanisms allowing for fast and accurate information processing in human communication.


Author(s):  
Karen Emmorey

Recent neuroimaging and electrophysiological studies reveal how the reading system successfully adapts when phonological codes are relatively coarse-grained due to reduced auditory input during development. New evidence suggests that the optimal end-state for the reading system may differ for deaf versus hearing adults and indicates that certain neural patterns that are maladaptive for hearing readers may be beneficial for deaf readers. This chapter focuses on deaf adults who are signers and have achieved reading success. Although the left-hemisphere-dominant reading circuit is largely similar in both deaf and hearing individuals, skilled deaf readers exhibit a more bilateral neural response to written words and sentences than their hearing peers, as measured by event-related potentials and functional magnetic resonance imaging. Skilled deaf readers may also rely more on neural regions involved in semantic processing than hearing readers do. Overall, emerging evidence indicates that the neural markers for reading skill may differ for deaf and hearing adults.


2015 ◽  
Vol 27 (3) ◽  
pp. 492-508 ◽  
Author(s):  
Nicholas E. Myers ◽  
Lena Walther ◽  
George Wallis ◽  
Mark G. Stokes ◽  
Anna C. Nobre

Working memory (WM) is strongly influenced by attention. In visual WM tasks, recall performance can be improved by an attention-guiding cue presented before encoding (precue) or during maintenance (retrocue). Although precues and retrocues recruit a similar frontoparietal control network, the two are likely to exhibit some processing differences, because precues invite anticipation of upcoming information whereas retrocues may guide prioritization, protection, and selection of information already in mind. Here we explored the behavioral and electrophysiological differences between precueing and retrocueing in a new visual WM task designed to permit a direct comparison between cueing conditions. We found marked differences in ERP profiles between the precue and retrocue conditions. In line with precues primarily generating an anticipatory shift of attention toward the location of an upcoming item, we found a robust lateralization in late cue-evoked potentials associated with target anticipation. Retrocues elicited a different pattern of ERPs that was compatible with an early selection mechanism, but not with stimulus anticipation. In contrast to the distinct ERP patterns, alpha-band (8–14 Hz) lateralization was indistinguishable between cue types (reflecting, in both conditions, the location of the cued item). We speculate that, whereas alpha-band lateralization after a precue is likely to enable anticipatory attention, lateralization after a retrocue may instead enable the controlled spatiotopic access to recently encoded visual information.
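As an illustration of the alpha-band lateralization measure discussed in this abstract, the sketch below computes a conventional lateralization index from one channel ipsilateral and one contralateral to the cued location. The channel handling, sampling rate, and index definition are illustrative assumptions, not the authors' actual analysis pipeline.

```python
# Minimal sketch (not the authors' pipeline): a conventional alpha-band
# (8-14 Hz) lateralization index from two EEG channels, one ipsilateral
# and one contralateral to the cued location. The sampling rate and the
# toy signals are assumptions for illustration.
import numpy as np
from scipy.signal import welch

def alpha_power(x, fs, band=(8.0, 14.0)):
    """Mean power spectral density of x within the alpha band, via Welch."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), int(2 * fs)))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[in_band].mean()

def lateralization_index(ipsi, contra, fs):
    """(ipsi - contra) / (ipsi + contra); positive values indicate
    relatively higher alpha power ipsilateral to the cued item."""
    p_i, p_c = alpha_power(ipsi, fs), alpha_power(contra, fs)
    return (p_i - p_c) / (p_i + p_c)

# Toy usage: synthetic 10-Hz activity, stronger at the ipsilateral channel.
fs = 250  # Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
ipsi = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
contra = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(f"alpha lateralization index: {lateralization_index(ipsi, contra, fs):+.3f}")
```

On this kind of measure, the abstract's finding is that the index behaves the same after precues and retrocues, even though the cue-evoked ERPs differ.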


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 147-147
Author(s):  
P Stivalet ◽  
Y Moreno ◽  
C Cian ◽  
J Richard ◽  
P-A Barraud

In a visual search paradigm we measured the stimulus onset asynchrony (SOA) between stimulus and mask required to reach 90% correct responses. This procedure has the advantage of reflecting the actual processing time while excluding the time needed to generate the motor response. Twelve congenitally deaf adult subjects and twelve hearing subjects were given a visual search task for a target letter O among a varying number of distractor letters Q, and vice versa. In both groups we found the asymmetrical visual search pattern classically observed, with parallel processing for the search for the target Q and serial processing for the search for the target O (Treisman, 1985 Computer Vision, Graphics, and Image Processing 31 156 – 177). The difference between the mean search slopes for an O target was not statistically significant between the groups; this might be due to the variability within the groups. Visual search in the congenitally deaf thus does not seem to benefit from a compensatory effect related to auditory deprivation. Our results seem to confirm data reported by Neville (1990 Annals of the New York Academy of Sciences 71 – 91) obtained with an electrophysiological technique based on event-related potentials. Nevertheless, the deaf subjects were 2.5 times faster at the visual search task.
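For readers unfamiliar with the search-slope analysis referred to above, the following is the standard linear model of visual search performance in the Treisman tradition (a textbook formulation, not this study's exact fit); here the dependent measure is the critical masked SOA rather than response time:

```latex
% Standard linear model of visual search (Treisman-style analysis).
% T(N): critical SOA for 90% correct at display size N;
% a: base (display-size-independent) processing time; b: search slope.
T(N) = a + b\,N,
\qquad
\begin{cases}
  b \approx 0 & \text{parallel search (target Q among Os)} \\
  b > 0       & \text{serial search (target O among Qs)}
\end{cases}
```

On this analysis, a compensatory advantage in the deaf group would appear as a shallower slope b for the serial (O-target) search; the abstract reports that this slope difference did not reach significance.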


2008 ◽  
Vol 20 (3-4) ◽  
pp. 71-81 ◽  
Author(s):  
Stephanie L. Simon-Dack ◽  
P. Dennis Rodriguez ◽  
Wolfgang A. Teder-Sälejärvi

Imaging, transcranial magnetic stimulation, and psychophysiological recordings of the congenitally blind have confirmed functional activation of the visual cortex but have not extensively explained the functional significance of these activation patterns. This review systematically examines research on the role of the visual cortex in processing spatial and non-visual information, highlighting research on individuals with early- and late-onset blindness. Here, we concentrate on the methods utilized to study visual cortical activation in early blind participants, including positron emission tomography (PET), functional magnetic resonance imaging (fMRI), transcranial magnetic stimulation (TMS), and electrophysiological data, specifically event-related potentials (ERPs). This paper summarizes and discusses the findings of these studies. We hypothesize how mechanisms of cortical plasticity are expressed in congenitally blind as compared to adventitiously blind and short-term visually deprived sighted participants, and discuss potential approaches for further investigation of these mechanisms in future research.

