Congruent lip movements facilitate speech processing in a dynamic audiovisual multi-talker scenario: An ERP study with older and younger adults

2020
Author(s):  
Alexandra Begau ◽  
Laura-Isabelle Klatt ◽  
Edmund Wascher ◽  
Daniel Schneider ◽  
Stephan Getzmann

Abstract. In natural conversations, visible mouth and lip movements play an important role in speech comprehension. There is evidence that visual speech information improves speech comprehension, especially for older adults and under difficult listening conditions. However, the neurocognitive basis is still poorly understood. The present EEG experiment investigated the benefits of audiovisual speech in a dynamic cocktail-party scenario with 22 younger (aged 20 to 34 years) and 20 older (aged 55 to 74 years) participants. We presented three simultaneously talking faces with a varying amount of visual speech input (still faces, visually unspecific movements, or audiovisually congruent movements). In a two-alternative forced-choice task, participants had to discriminate target words (“yes” or “no”) from two distractors (one-digit number words). In half of the experimental blocks, the target was always presented from a central position; in the other half, occasional switches to a lateral position could occur. We investigated behavioral and electrophysiological modulations due to age, location switches, and the content of visual information, analyzing response times and accuracy as well as the P1, N1, P2, and N2 event-related potentials (ERPs) and the contingent negative variation (CNV). We found that audiovisually congruent speech information improved performance and modulated ERP amplitudes in both age groups, suggesting enhanced preparation for and integration of the subsequent auditory input. However, these benefits were observed only as long as no location switches occurred. In conclusion, meaningful visual information in a multi-talker setting is beneficial for both younger and older adults, provided it is presented from the expected location.
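As an illustration of the kind of ERP amplitude analysis described above (not the authors' code), the following Python/NumPy sketch computes the mean amplitude of a component in a fixed time window from baseline-corrected, epoched EEG. The array shapes, the channel index, and the P2-like window are hypothetical.

```python
import numpy as np

def mean_component_amplitude(epochs, times, ch_idx, window):
    """Mean ERP amplitude in a component time window.

    epochs : ndarray (n_trials, n_channels, n_times), baseline-corrected EEG
    times  : ndarray (n_times,), sample times in seconds relative to stimulus onset
    ch_idx : index of the channel of interest (e.g. a fronto-central site)
    window : (tmin, tmax) in seconds, e.g. (0.15, 0.25) for a P2-like window
    """
    mask = (times >= window[0]) & (times <= window[1])
    erp = epochs[:, ch_idx, :].mean(axis=0)  # trial-averaged waveform at this channel
    return erp[mask].mean()                  # mean amplitude across the window

# Hypothetical comparison of audiovisually congruent vs. still-face conditions:
# p2_congruent = mean_component_amplitude(epochs_congruent, times, cz_idx, (0.15, 0.25))
# p2_still     = mean_component_amplitude(epochs_still, times, cz_idx, (0.15, 0.25))
```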

2005, Vol. 17 (8), pp. 1341-1352
Author(s):  
Joseph B. Hopfinger ◽  
Anthony J. Ries

Recent studies have generated debate regarding whether reflexive attention mechanisms are triggered in a purely automatic stimulus-driven manner. Behavioral studies have found that a nonpredictive “cue” stimulus will speed manual responses to subsequent targets at the same location, but only if that cue is congruent with actively maintained top-down settings for target detection. When a cue is incongruent with top-down settings, response times are unaffected, and this has been taken as evidence that reflexive attention mechanisms were never engaged in those conditions. However, manual response times may mask effects on earlier stages of processing. Here, we used event-related potentials to investigate the interaction of bottom-up sensory-driven mechanisms and top-down control settings at multiple stages of processing in the brain. Our results dissociate sensory-driven mechanisms that automatically bias early stages of visual processing from later mechanisms that are contingent on top-down control. An early enhancement of target processing in the extrastriate visual cortex (i.e., the P1 component) was triggered by the appearance of a unique bright cue, regardless of top-down settings. The enhancement of visual processing was prolonged, however, when the cue was congruent with top-down settings. Later processing in posterior temporal-parietal regions (i.e., the ipsilateral invalid negativity) was triggered automatically when the cue consisted of the abrupt appearance of a single new object. However, in cases where more than a single object appeared during the cue display, this stage of processing was contingent on top-down control. These findings provide evidence that visual information processing is biased at multiple levels in the brain, and the results distinguish automatically triggered sensory-driven mechanisms from those that are contingent on top-down control settings.


Perception, 1997, Vol. 26 (1_suppl), pp. 252-252
Author(s):  
C Marendaz ◽  
C Robert ◽  
F Bonthoux

Neurophysiological (epigenetic specialisation of cortical areas) as well as behavioural (sign language, visual control of spatial surroundings) constraints suggest that deaf people should develop heightened abilities for processing parafoveal/peripheral visual information. Electrophysiological (visual event-related potentials) and psychophysical research using visual detection tasks with congenitally deaf adults corroborates this viewpoint (Neville, 1994, The Cognitive Neurosciences, 219–231). The aim of this study was to examine whether this ability remains when the visual detection task requires a spatiotemporal organisation of attention. Forty congenitally bilaterally deaf subjects (from a specialised institution) and sixty-four hearing subjects, subdivided into five age groups (from 7 years of age to young adults), performed four visual search tasks. The results showed that the younger deaf children performed dramatically worse than the age-matched hearing children. This difference in performance between deaf and hearing children, however, disappeared at an age level of 11 years. Deaf adults did not perform significantly better than hearing adults. The data obtained in children have been replicated in a longitudinal study (re-test two years later). We are currently trying to determine which attentional mechanisms are most impaired in young deaf children (spatiotemporal organisation of search, engagement/disengagement of attention, etc) and what underlies the apparent amelioration of their deficit during development.


2017, Vol. 31 (2), pp. 49-66
Author(s):  
Eva-Maria Reuter ◽  
Claudia Voelcker-Rehage ◽  
Solveig Vieluf ◽  
Franca Parianen Lesemann ◽  
Ben Godde

Abstract. Older adults recruit relatively more frontal as compared to parietal resources in a variety of cognitive and perceptual tasks. It is not yet clear whether this parietal-to-frontal shift is a compensatory mechanism or simply reflects a reduction in processing efficiency. In this study, we aimed to investigate how the parietal-to-frontal shift with aging relates to selective attention. Fourteen young and 26 older healthy adults performed a color Flanker task under three conditions (incongruent, congruent, neutral) while event-related potentials (ERPs) were measured. The P3 was analyzed at electrode positions Pz, Cz, and Fz as an indicator of the parietal-to-frontal shift. Further, behavioral performance and other ERP components (P1 and N1 at electrodes O1 and O2; N2 at electrodes Fz and Cz) were investigated. First, young and older adults were compared. Older adults had longer response times, reduced accuracy, longer P3 latencies, and a more frontal distribution of the P3 than young adults. These results confirm the parietal-to-frontal shift in the P3 with age for the selective attention task. Second, based on the difference between frontal and parietal P3 activity, the group of older adults was subdivided into participants showing a rather equal distribution of the P3 and those showing a strong frontal focus of the P3. Older adults with a more frontally distributed P3 had longer response times than participants with a more equally distributed P3. These results suggest that the frontally distributed P3 observed in older adults has no compensatory function in selective attention but rather indicates less efficient processing and slowing with age.
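A minimal sketch of how a frontal-versus-parietal P3 split of the kind described above could be computed, assuming trial-averaged waveforms at Fz and Pz stored as NumPy arrays; the P3 window and the median split are illustrative choices, not necessarily those used by the authors.

```python
import numpy as np

def p3_frontality_index(erp_fz, erp_pz, times, window=(0.3, 0.6)):
    """Frontal-minus-parietal mean P3 amplitude (positive = more frontal focus).

    erp_fz, erp_pz : trial-averaged waveforms at Fz and Pz (same length as times)
    times          : sample times in seconds
    window         : hypothetical P3 time window
    """
    mask = (times >= window[0]) & (times <= window[1])
    return erp_fz[mask].mean() - erp_pz[mask].mean()

# Hypothetical subdivision of the older group on this index:
# indices = np.array([p3_frontality_index(fz, pz, times) for fz, pz in older_erps])
# frontal_focus = indices > np.median(indices)   # 'frontally focused' subgroup
```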


2020, Vol. 2020, pp. 1-8
Author(s):  
Jinhua Xian ◽  
Yan Wang ◽  
Buxin Han

The effect of emotion on prospective memory in different age groups and its neural mechanism in Chinese adults are still unclear. The present study used event-related potentials (ERPs) to investigate the effect of emotion on prospective memory during the encoding and retrieval phases in younger and older adults. In the behavioral results, a shorter response time for positive prospective memory cues was found only in the older group. In the ERP results, during the encoding phase, an increased late positive potential (LPP) was found for negative prospective memory cues in younger adults, whereas in older adults the LPP amplitude was marginally greater for positive than for negative prospective memory cues. Correspondingly, during the retrieval phase, younger adults showed an increased parietal positivity for negative prospective memory cues, whereas older adults showed an elevated parietal positivity for positive prospective memory cues. These findings reflect increased attentional processing during encoding and the recruitment of additional cognitive resources to carry out the processes associated with the realization of delayed intentions when prospective memory cues are emotional. The results reveal the effect of emotion on prospective memory during the encoding and retrieval phases in Chinese adults, modulated by aging, with a positivity effect in older adults and a negativity bias in younger adults.


2009, Vol. 23 (2), pp. 63-76
Author(s):  
Silke Paulmann ◽  
Sarah Jessen ◽  
Sonja A. Kotz

The multimodal nature of human communication has been well established. Yet few empirical studies have systematically examined the widely held belief that multimodal perception is facilitated in comparison to unimodal or bimodal perception. In the current experiment, we first explored the processing of unimodally presented facial expressions. Furthermore, auditory (prosodic and/or lexical-semantic) information was presented together with the visual information to investigate the processing of bimodal (facial and prosodic cues) and multimodal (facial, lexical, and prosodic cues) human communication. Participants engaged in an identity identification task while event-related potentials (ERPs) were recorded to examine early processing mechanisms as reflected in the P200 and N300 components. While the former component has repeatedly been linked to the processing of physical stimulus properties, the latter has been linked to more evaluative, “meaning-related” processing. A direct relationship was found between P200 and N300 amplitude and the number of information channels present. The multimodal-channel condition elicited the smallest amplitude in the P200 and N300 components, followed by an increased amplitude in each component for the bimodal-channel condition. The largest amplitude was observed for the unimodal condition. These data suggest that multimodal information induces clear facilitation in comparison to unimodal or bimodal information. The advantage of multimodal perception as reflected in the P200 and N300 components may thus reflect one of the mechanisms allowing for fast and accurate information processing in human communication.


Entropy, 2021, Vol. 23 (3), 304
Author(s):  
Kelsey Cnudde ◽  
Sophia van Hees ◽  
Sage Brown ◽  
Gwen van der Wijk ◽  
Penny M. Pexman ◽  
...  

Visual word recognition is a relatively effortless process, but recent research suggests the system involved is malleable, with evidence of increases in behavioural efficiency after prolonged lexical decision task (LDT) performance. However, the extent of neural changes has yet to be characterized in this context. The neural changes that occur could be related to a shift from initially effortful performance that is supported by control-related processing, to efficient task performance that is supported by domain-specific processing. To investigate this, we replicated the British Lexicon Project, and had participants complete 16 h of LDT over several days. We recorded electroencephalography (EEG) at three intervals to track neural change during LDT performance and assessed event-related potentials and brain signal complexity. We found that response times decreased during LDT performance, and there was evidence of neural change through N170, P200, N400, and late positive component (LPC) amplitudes across the EEG sessions, which suggested a shift from control-related to domain-specific processing. We also found widespread complexity decreases alongside localized increases, suggesting that processing became more efficient with specific increases in processing flexibility. Together, these findings suggest that neural processing becomes more efficient and optimized to support prolonged LDT performance.
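The abstract does not specify which complexity measure was used; as an illustrative example only, the sketch below computes sample entropy, one common brain-signal complexity measure, for a single EEG channel stored as a 1-D NumPy array, with conventional parameter defaults (m = 2, r = 0.2 × SD) rather than the authors' settings.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D signal (Richman & Moorman, 2000).

    x : 1-D array (e.g. one EEG channel from one recording session)
    m : embedding dimension; r : tolerance (defaults to 0.2 * SD of x)
    """
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    n = len(x)

    def match_count(dim):
        # All overlapping templates of length `dim` (same count for m and m + 1).
        templates = np.array([x[i:i + dim] for i in range(n - m)])
        # Pairwise Chebyshev distances; O(n^2) memory, fine for short epochs.
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Count pairs within tolerance, excluding self-matches.
        return np.sum(dists <= r) - len(templates)

    b = match_count(m)      # template matches of length m
    a = match_count(m + 1)  # template matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

Lower values indicate a more regular signal; applied channel by channel and session by session, such a measure could track the kind of complexity decreases and localized increases reported above.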


2015, Vol. 27 (3), pp. 492-508
Author(s):  
Nicholas E. Myers ◽  
Lena Walther ◽  
George Wallis ◽  
Mark G. Stokes ◽  
Anna C. Nobre

Working memory (WM) is strongly influenced by attention. In visual WM tasks, recall performance can be improved by an attention-guiding cue presented before encoding (precue) or during maintenance (retrocue). Although precues and retrocues recruit a similar frontoparietal control network, the two are likely to exhibit some processing differences, because precues invite anticipation of upcoming information whereas retrocues may guide prioritization, protection, and selection of information already in mind. Here we explored the behavioral and electrophysiological differences between precueing and retrocueing in a new visual WM task designed to permit a direct comparison between cueing conditions. We found marked differences in ERP profiles between the precue and retrocue conditions. In line with precues primarily generating an anticipatory shift of attention toward the location of an upcoming item, we found a robust lateralization in late cue-evoked potentials associated with target anticipation. Retrocues elicited a different pattern of ERPs that was compatible with an early selection mechanism, but not with stimulus anticipation. In contrast to the distinct ERP patterns, alpha-band (8–14 Hz) lateralization was indistinguishable between cue types (reflecting, in both conditions, the location of the cued item). We speculate that, whereas alpha-band lateralization after a precue is likely to enable anticipatory attention, lateralization after a retrocue may instead enable the controlled spatiotopic access to recently encoded visual information.
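As a rough illustration of the alpha-band (8–14 Hz) lateralization analysis mentioned above (not the authors' pipeline), the following sketch computes a contralateral-minus-ipsilateral alpha-power index from epoched EEG using SciPy; the channel selections, cue coding, and normalization are assumptions.

```python
import numpy as np
from scipy.signal import welch

def alpha_lateralization(epochs, left_idx, right_idx, cue_side, sfreq, band=(8.0, 14.0)):
    """Contralateral-minus-ipsilateral alpha power, normalized by their sum.

    epochs    : ndarray (n_trials, n_channels, n_times), epoched EEG
    left_idx  : indices of left posterior channels (hypothetical, e.g. a PO7 cluster)
    right_idx : indices of right posterior channels (hypothetical, e.g. a PO8 cluster)
    cue_side  : array of 'L'/'R' per trial, side of the cued item
    sfreq     : sampling frequency in Hz
    """
    freqs, psd = welch(epochs, fs=sfreq, nperseg=min(256, epochs.shape[-1]), axis=-1)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    alpha = psd[..., in_band].mean(axis=-1)        # (n_trials, n_channels)
    left = alpha[:, left_idx].mean(axis=1)         # left-hemisphere posterior power
    right = alpha[:, right_idx].mean(axis=1)       # right-hemisphere posterior power
    cue_side = np.asarray(cue_side)
    contra = np.where(cue_side == 'L', right, left)
    ipsi = np.where(cue_side == 'L', left, right)
    return (contra - ipsi) / (contra + ipsi)       # negative = contralateral suppression
```

Computing this index separately for precue and retrocue trials would mirror the comparison between cue types described in the abstract.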


2019, Vol. 77, pp. 20-25
Author(s):  
Cassandra Morrison ◽  
Farooq Kamal ◽  
Kenneth Campbell ◽  
Vanessa Taler
