Seeing and Hearing Meaning: ERP and fMRI Evidence of Word versus Picture Integration into a Sentence Context

2008, Vol 20 (7), pp. 1235-1249
Author(s): Roel M. Willems, Aslı Özyürek, Peter Hagoort

Understanding language always occurs within a situational context and, therefore, often implies combining streams of information from different domains and modalities. One such combination is that of spoken language and visual information, which are perceived together in a variety of ways during everyday communication. Here we investigate whether and how words and pictures differ in terms of their neural correlates when they are integrated into a previously built-up sentence context. This is assessed in two experiments looking at the time course (measuring event-related potentials, ERPs) and the locus (using functional magnetic resonance imaging, fMRI) of this integration process. We manipulated the ease of semantic integration of the word and/or picture into the previous sentence context to increase the semantic load of processing. In the ERP study, an increased semantic load led to an N400 effect which was similar for pictures and words in terms of latency and amplitude. In the fMRI study, we found overlapping activations to both picture and word integration in the left inferior frontal cortex. Specific activations for the integration of a word were observed in the left superior temporal cortex. We conclude that despite obvious differences in representational format, semantic information coming from pictures and words is integrated into a sentence context in similar ways in the brain. This study adds to the growing insight that the language system incorporates (semantic) information coming from linguistic and extralinguistic domains with the same neural time course and by recruitment of overlapping brain areas.
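A minimal sketch (in Python, not the authors' pipeline) of how an N400 effect like this is typically quantified: average the epochs for each condition, form the hard-minus-easy difference wave, and take its mean amplitude in the canonical 300-500 ms window. The array shapes, sampling rate, and placeholder data are all assumptions for illustration.

```python
import numpy as np

fs = 500                                  # sampling rate in Hz (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)          # epoch from -200 to 800 ms

# Placeholder epochs: (n_trials, n_channels, n_samples)
rng = np.random.default_rng(0)
epochs_hard = rng.normal(size=(40, 32, t.size))   # hard-to-integrate condition
epochs_easy = rng.normal(size=(40, 32, t.size))   # easy-to-integrate condition

erp_hard = epochs_hard.mean(axis=0)       # per-channel average ERP
erp_easy = epochs_easy.mean(axis=0)
difference_wave = erp_hard - erp_easy     # N400 effect: hard minus easy

# Mean amplitude in the classic N400 window (300-500 ms post-onset)
win = (t >= 0.3) & (t <= 0.5)
n400_amplitude = difference_wave[:, win].mean(axis=1)
print(n400_amplitude.shape)               # one value per channel: (32,)
```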

2019, Vol 40 (6), pp. 1481-1494
Author(s): Zude Zhu, Suiping Wang, Nannan Xu, Mengya Li, Yiming Yang

Semantic integration and working memory both decline with age. However, it remains unclear whether the decline in semantic integration is independent of the decline in working memory or can be explained entirely by the latter. In this event-related potential experiment, 43 younger adults and 43 cognitively healthy older adults read semantically congruent and incongruent sentences. After controlling for working memory, behavioral accuracy was significantly lower in the older adults than in the younger adults. In addition, the semantic-integration-related N400 effect (incongruent vs. congruent) for correct trials was broadly distributed across the scalp in the younger adults but restricted to posterior regions in the older adults. The results clarify the relationship between working memory and semantic integration, and clearly demonstrate that the decline in semantic integration during aging is independent of the decline in working memory.
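The phrase "after controlling for working memory" suggests an ANCOVA-style analysis; the hedged sketch below shows one way to test a group effect on accuracy with working-memory score as a covariate. The data frame, column names, and simulated values are illustrative assumptions, not the authors' data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": ["young"] * 43 + ["old"] * 43,                        # 43 per group
    "wm_span": np.r_[rng.normal(6, 1, 43), rng.normal(5, 1, 43)],  # WM covariate
    "accuracy": np.r_[rng.normal(0.95, 0.03, 43),
                      rng.normal(0.90, 0.04, 43)],
})

# Group effect on accuracy after partialling out working-memory span (ANCOVA)
model = smf.ols("accuracy ~ C(group) + wm_span", data=df).fit()
print(model.summary())
```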


2003, Vol 4 (1), pp. 63-80
Author(s): Michela Balconi, Uberto Pozzoli

ERP (event-related potential) correlates are widely used in cognitive psychology, specifically for the analysis of semantic information processing. Previous research has underlined a strong association between a frontally distributed negative-going wave (N400) and semantic anomalies, both linguistic and extra-linguistic. With reference to the extra-linguistic domain, our experiment analyzed ERP variation in a semantic task involving the comprehension of emotional facial expressions. The experiment explored the effect of expectancy violation when subjects observed congruous or incongruous emotional facial patterns. Four prototypical (anger, sadness, happiness, and surprise) and four morphed faces were presented. Moreover, two distinct cognitive tasks (implicit vs. explicit elaboration) were analyzed in order to evaluate the influence of spontaneous decoding on N400-like effects. We found an automatic, higher-order cognitive process, indexed by a negative ERP deflection similar to the linguistic N400 effect, which accounts for congruous/incongruous decoding in extra-linguistic semantic comprehension.


2018, Vol 30 (4), pp. 498-513
Author(s): Christian Graulty, Orestis Papaioannou, Phoebe Bauer, Michael A. Pitts, Enriqueta Canseco-Gonzalez

In auditory–visual sensory substitution, visual information (e.g., shape) can be extracted through strictly auditory input (e.g., soundscapes). Previous studies have shown that image-to-sound conversions that follow simple rules [such as the Meijer algorithm; Meijer, P. B. L. An experimental system for auditory image representation. IEEE Transactions on Biomedical Engineering, 39, 111–121, 1992] are highly intuitive and rapidly learned by both blind and sighted individuals. A number of recent fMRI studies have begun to explore the neuroplastic changes that result from sensory substitution training. However, the time course of cross-sensory information transfer in sensory substitution is largely unexplored and may offer insights into the underlying neural mechanisms. In this study, we recorded ERPs to soundscapes before and after sighted participants were trained with the Meijer algorithm. We compared these posttraining versus pretraining ERP differences with those of a control group who received the same set of 80 auditory/visual stimuli but with arbitrary pairings during training. Our behavioral results confirmed the rapid acquisition of cross-sensory mappings, and the group trained with the Meijer algorithm was able to generalize their learning to novel soundscapes at impressive levels of accuracy. The ERP results revealed an early cross-sensory learning effect (150–210 msec) that was significantly enhanced in the algorithm-trained group compared with the control group as well as a later difference (420–480 msec) that was unique to the algorithm-trained group. These ERP modulations are consistent with previous fMRI results and provide additional insight into the time course of cross-sensory information transfer in sensory substitution.
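For readers unfamiliar with Meijer-style image-to-sound conversion, the sketch below implements the commonly described mapping: the image is scanned left to right, each column becomes a brief chord, row position sets the sine frequency (top = high pitch), and pixel brightness sets the amplitude. The parameter values are illustrative assumptions, not Meijer's exact settings.

```python
import numpy as np

def image_to_soundscape(img, fs=22050, duration=1.0, f_lo=500.0, f_hi=5000.0):
    """img: 2D array (rows x cols) of brightness values in [0, 1]."""
    n_rows, n_cols = img.shape
    samples_per_col = int(fs * duration / n_cols)
    # Exponentially spaced frequencies; top row maps to the highest pitch
    freqs = f_lo * (f_hi / f_lo) ** np.linspace(1, 0, n_rows)
    columns = []
    for c in range(n_cols):
        t = np.arange(samples_per_col) / fs
        # Sum of sinusoids, each weighted by its pixel's brightness
        chord = (img[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        columns.append(chord)
    wave = np.concatenate(columns)
    return wave / np.max(np.abs(wave))    # normalize to [-1, 1]

# A diagonal line (top-left to bottom-right) yields a falling pitch sweep
soundscape = image_to_soundscape(np.eye(64))
```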


2017
Author(s): Mikio Inagaki, Ichiro Fujita

The amygdala plays a critical role in detecting potential danger through sensory input [1, 2]. In the primate visual system, a subcortical pathway through the superior colliculus and the pulvinar is thought to provide the amygdala with rapid and coarse visual information about facial emotions [3–6]. A recent electrophysiological study in human patients supported this hypothesis by showing that intracranial event-related potentials discriminated fearful faces from other faces very quickly (within ∼74 ms) [7]. However, several aspects of the hypothesis remain debatable [8]. Critically, evidence for short-latency, emotion-selective responses from individual amygdala neurons is lacking [9–12], and even if this type of response existed, how it might contribute to stimulus detection is unclear. Here, we addressed these issues in the monkey amygdala and found that ensemble responses of single neurons carry robust information about emotional faces, especially threatening ones, within ∼50 ms after stimulus onset. A similar rapid response was not found in the temporal cortex, from which the amygdala receives cortical inputs [13], suggesting a subcortical origin. Additionally, we found that the rapid amygdala response contained excitatory and suppressive components. The early excitatory component might be useful for quickly sending signals to downstream areas. In contrast, the rapid suppressive component sharpened the rising phase of the later, sustained excitatory input (presumably from the temporal cortex) and might therefore improve the processing of emotional faces over time. We thus propose that these two amygdala responses originating from the subcortical pathway play dual roles in threat detection.
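One common way to cash out the claim that "ensemble responses carry robust information" is cross-validated population decoding: classify the stimulus category from single-neuron spike counts in the early post-onset window. The sketch below uses simulated counts and an assumed 0-50 ms window; it is not the authors' analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_neurons = 200, 50
labels = rng.integers(0, 2, n_trials)      # 0 = neutral, 1 = threatening (simulated)
rates = 2.0 + 0.8 * labels                 # threatening faces drive more spikes
# Spike counts in an assumed 0-50 ms post-onset window
counts = rng.poisson(rates[:, None], size=(n_trials, n_neurons))

# Above-chance cross-validated accuracy indicates that the ensemble carries
# stimulus information within this early window
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, counts, labels, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```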


2009, Vol 23 (2), pp. 63-76
Author(s): Silke Paulmann, Sarah Jessen, Sonja A. Kotz

The multimodal nature of human communication has been well established. Yet few empirical studies have systematically examined the widely held belief that multimodal perception is facilitated in comparison to unimodal or bimodal perception. In the current experiment we first explored the processing of unimodally presented facial expressions. Furthermore, auditory (prosodic and/or lexical-semantic) information was presented together with the visual information to investigate the processing of bimodal (facial and prosodic cues) and multimodal (facial, lexical, and prosodic cues) human communication. Participants engaged in an identity identification task, while event-related potentials (ERPs) were recorded to examine early processing mechanisms as reflected in the P200 and N300 components. While the former component has repeatedly been linked to the processing of physical stimulus properties, the latter has been linked to more evaluative, "meaning-related" processing. A direct relationship between P200 and N300 amplitude and the number of information channels present was found. The multimodal condition elicited the smallest P200 and N300 amplitudes, followed by larger amplitudes in each component for the bimodal condition, with the largest amplitudes observed in the unimodal condition. These data suggest that multimodal information induces clear facilitation in comparison to unimodal or bimodal information. The advantage of multimodal perception as reflected in the P200 and N300 components may thus reflect one of the mechanisms allowing for fast and accurate information processing in human communication.


2015, Vol 27 (3), pp. 492-508
Author(s): Nicholas E. Myers, Lena Walther, George Wallis, Mark G. Stokes, Anna C. Nobre

Working memory (WM) is strongly influenced by attention. In visual WM tasks, recall performance can be improved by an attention-guiding cue presented before encoding (precue) or during maintenance (retrocue). Although precues and retrocues recruit a similar frontoparietal control network, the two are likely to exhibit some processing differences, because precues invite anticipation of upcoming information whereas retrocues may guide prioritization, protection, and selection of information already in mind. Here we explored the behavioral and electrophysiological differences between precueing and retrocueing in a new visual WM task designed to permit a direct comparison between cueing conditions. We found marked differences in ERP profiles between the precue and retrocue conditions. In line with precues primarily generating an anticipatory shift of attention toward the location of an upcoming item, we found a robust lateralization in late cue-evoked potentials associated with target anticipation. Retrocues elicited a different pattern of ERPs that was compatible with an early selection mechanism, but not with stimulus anticipation. In contrast to the distinct ERP patterns, alpha-band (8–14 Hz) lateralization was indistinguishable between cue types (reflecting, in both conditions, the location of the cued item). We speculate that, whereas alpha-band lateralization after a precue is likely to enable anticipatory attention, lateralization after a retrocue may instead enable the controlled spatiotopic access to recently encoded visual information.
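A minimal sketch of how alpha-band lateralization is commonly computed: band-pass the EEG at 8-14 Hz, square to get power, and contrast posterior electrodes contralateral versus ipsilateral to the cued location. The channel indices and data are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500
rng = np.random.default_rng(3)
eeg = rng.normal(size=(64, 2 * fs))        # (channels, samples), one 2 s epoch

# Band-pass 8-14 Hz, then square to get instantaneous alpha power
b, a = butter(4, [8, 14], btype="bandpass", fs=fs)
alpha_power = filtfilt(b, a, eeg, axis=1) ** 2

left_ch, right_ch = [10, 11, 12], [44, 45, 46]   # assumed posterior sites
# For a cue pointing left, the right hemisphere is contralateral
contra, ipsi = alpha_power[right_ch].mean(), alpha_power[left_ch].mean()
lateralization_index = (contra - ipsi) / (contra + ipsi)
print(lateralization_index)
```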


2015, Vol 27 (5), pp. 1017-1028
Author(s): Paul Metzner, Titus von der Malsburg, Shravan Vasishth, Frank Rösler

Recent research has shown that brain potentials time-locked to fixations in natural reading can be similar to brain potentials recorded during rapid serial visual presentation (RSVP). We attempted two replications of Hagoort, Hald, Bastiaansen, and Petersson [Hagoort, P., Hald, L., Bastiaansen, M., & Petersson, K. M. Integration of word meaning and world knowledge in language comprehension. Science, 304, 438–441, 2004] to determine whether this correspondence also holds for oscillatory brain responses. Hagoort et al. reported an N400 effect and synchronization in the theta and gamma range following world knowledge violations. Our first experiment (n = 32) used RSVP and replicated both the N400 effect in the ERPs and the power increase in the theta range in the time–frequency domain. In the second experiment (n = 49), participants read the same materials freely while their eye movements and their EEG were monitored. First fixation durations, gaze durations, and regression rates were increased, and the ERP showed an N400 effect. An analysis of time–frequency representations showed synchronization in the delta range (1–3 Hz) and desynchronization in the upper alpha range (11–13 Hz) but no theta or gamma effects. The results suggest that oscillatory EEG changes elicited by world knowledge violations are different in natural reading and RSVP. This may reflect differences in how representations are constructed and retrieved from memory in the two presentation modes.
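Time-frequency analyses like those described here typically rest on complex Morlet wavelet convolution; the sketch below computes single-trial theta-band (4-7 Hz) power that way. The wavelet parameters and the simulated trial are assumptions, not the authors' settings.

```python
import numpy as np

def morlet_power(signal, fs, freqs, n_cycles=5):
    """Return (n_freqs, n_samples) time-frequency power via Morlet convolution."""
    powers = []
    for f in freqs:
        dur = n_cycles / f                        # wavelet half-length in seconds
        t = np.arange(-dur, dur, 1 / fs)
        sigma = n_cycles / (2 * np.pi * f)        # Gaussian width in time
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        wavelet /= np.abs(wavelet).sum()
        analytic = np.convolve(signal, wavelet, mode="same")
        powers.append(np.abs(analytic) ** 2)
    return np.array(powers)

fs = 500
rng = np.random.default_rng(4)
trial = rng.normal(size=fs * 4)               # one 4 s trial (simulated)
theta = morlet_power(trial, fs, freqs=np.arange(4, 8))   # 4-7 Hz
print(theta.shape)                            # (4, 2000)
```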


2007, Vol 19 (10), pp. 1595-1608
Author(s): Leanne M. Williams, Andrew H. Kemp, Kim Felmingham, Belinda J. Liddell, Donna M. Palmer, ...

Although biases toward signals of fear may be an evolutionary adaptation necessary for survival, heightened biases may be maladaptive and associated with anxiety or depression. In this study, event-related potentials (ERPs) were used to examine the time course of neural responses to facial fear stimuli (versus neutral) presented overtly (for 500 msec with conscious attention) and covertly (for 10 msec with immediate masking to preclude conscious awareness) in 257 nonclinical subjects. We also examined the impact of trait anxiety and depression, assessed using psychometric ratings, on the time course of ERPs. In the total subject group, controlled biases to overtly processed fear were reflected in an enhancement of ERPs associated with structural encoding (120–220 msec) and sustained evaluation persisting from 250 msec and beyond, following a temporo-occipital to frontal topography. By contrast, covert fear processing elicited automatic biases, reflected in an enhancement of ERPs prior to structural encoding (80–180 msec) and again in the period associated with automatic orienting and emotion encoding (230–330 msec), which followed the reverse frontal to temporo-occipital topography. Higher levels of trait anxiety (in the clinical range) were distinguished by a heightened bias to covert fear (speeding of early ERPs), compared to higher depression which was associated with an opposing bias to overt fear (slowing of later ERPs). Anxiety also heightened early responses to covert fear, and depression to overt fear, with subsequent deficits in emotion encoding in each case. These findings are consistent with neural biases to signals of fear which operate automatically and during controlled processing, feasibly supported by parallel networks. Heightened automatic biases in anxiety may contribute to a cycle of hypervigilance and anxious thoughts, whereas depression may represent a “burnt out” emotional state in which evaluation of fear stimuli is prolonged only when conscious attention is allocated.
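Relating psychometric ratings to ERP measures, as done here, often comes down to an across-subjects correlation; the sketch below correlates simulated trait-anxiety scores with an early ERP latency. All values are simulated assumptions, and the authors' actual statistics may differ.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
n_subjects = 257
anxiety = rng.normal(40, 10, n_subjects)      # simulated trait-anxiety ratings
# Simulate earlier (smaller) latencies with higher anxiety, plus noise
latency_ms = 150 - 0.5 * (anxiety - 40) + rng.normal(0, 10, n_subjects)

r, p = pearsonr(anxiety, latency_ms)
print(f"r = {r:.2f}, p = {p:.3g}")
```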

