Investigating the Multimodal Nature of Human Communication

2009 ◽  
Vol 23 (2) ◽  
pp. 63-76 ◽  
Author(s):  
Silke Paulmann ◽  
Sarah Jessen ◽  
Sonja A. Kotz

The multimodal nature of human communication has been well established. Yet few empirical studies have systematically examined the widely held belief that multimodal perception is facilitated in comparison to unimodal or bimodal perception. In the current experiment we first explored the processing of unimodally presented facial expressions. Furthermore, auditory (prosodic and/or lexical-semantic) information was presented together with the visual information to investigate the processing of bimodal (facial and prosodic cues) and multimodal (facial, lexical, and prosodic cues) human communication. Participants engaged in an identity identification task, while event-related potentials (ERPs) were recorded to examine early processing mechanisms as reflected in the P200 and N300 components. While the former component has repeatedly been linked to the processing of physical stimulus properties, the latter has been linked to more evaluative “meaning-related” processing. A direct relationship between P200 and N300 amplitude and the number of information channels present was found. The multimodal-channel condition elicited the smallest amplitude in the P200 and N300 components, followed by an increased amplitude in each component for the bimodal-channel condition. The largest amplitude was observed for the unimodal condition. These data suggest that multimodal information induces clear facilitation in comparison to unimodal or bimodal information. The advantage of multimodal perception as reflected in the P200 and N300 components may thus reflect one of the mechanisms allowing for fast and accurate information processing in human communication.

2006 ◽  
Vol 20 (1) ◽  
pp. 21-31 ◽  
Author(s):  
Bruce D. Dick ◽  
John F. Connolly ◽  
Michael E. Houlihan ◽  
Patrick J. McGrath ◽  
G. Allen Finley ◽  
...  

Abstract: Previous research has found that pain can exert a disruptive effect on cognitive processing. This experiment extended previous research conducted with participants with chronic pain. This report examines pain's effects on early processing of auditory stimulus differences using the Mismatch Negativity (MMN) in healthy participants while they experienced experimentally induced pain. Event-related potentials (ERPs) were recorded using target and standard tones whose pitch differences were easy- or difficult-to-detect in conditions where participants attended to (active attention) or ignored (passive attention) the stimuli. Both attention manipulations were conducted in no pain and pain conditions. Experimentally induced ischemic pain did not disrupt the MMN. However, MMN amplitudes were larger to difficult-to-detect deviant tones during painful stimulation when they were attended than when they were ignored. Also, MMN amplitudes were larger to the difficult- than to the easy-to-detect tones in the active attention condition regardless of pain condition. It appears that rather than exerting a disruptive effect, the presence of experimentally induced pain enhanced early processing of small stimulus differences in these healthy participants.


2015 ◽  
Vol 27 (3) ◽  
pp. 492-508 ◽  
Author(s):  
Nicholas E. Myers ◽  
Lena Walther ◽  
George Wallis ◽  
Mark G. Stokes ◽  
Anna C. Nobre

Working memory (WM) is strongly influenced by attention. In visual WM tasks, recall performance can be improved by an attention-guiding cue presented before encoding (precue) or during maintenance (retrocue). Although precues and retrocues recruit a similar frontoparietal control network, the two are likely to exhibit some processing differences, because precues invite anticipation of upcoming information whereas retrocues may guide prioritization, protection, and selection of information already in mind. Here we explored the behavioral and electrophysiological differences between precueing and retrocueing in a new visual WM task designed to permit a direct comparison between cueing conditions. We found marked differences in ERP profiles between the precue and retrocue conditions. In line with precues primarily generating an anticipatory shift of attention toward the location of an upcoming item, we found a robust lateralization in late cue-evoked potentials associated with target anticipation. Retrocues elicited a different pattern of ERPs that was compatible with an early selection mechanism, but not with stimulus anticipation. In contrast to the distinct ERP patterns, alpha-band (8–14 Hz) lateralization was indistinguishable between cue types (reflecting, in both conditions, the location of the cued item). We speculate that, whereas alpha-band lateralization after a precue is likely to enable anticipatory attention, lateralization after a retrocue may instead enable the controlled spatiotopic access to recently encoded visual information.


2008 ◽  
Vol 20 (3-4) ◽  
pp. 71-81 ◽  
Author(s):  
Stephanie L. Simon-Dack ◽  
P. Dennis Rodriguez ◽  
Wolfgang A. Teder-Sälejärvi

Imaging, transcranial magnetic stimulation, and psychophysiological recordings of the congenitally blind have confirmed functional activation of the visual cortex but have not extensively explained the functional significance of these activation patterns in detail. This review systematically examines research on the role of the visual cortex in processing spatial and non-visual information, highlighting research on individuals with early and late onset blindness. Here, we concentrate on the methods utilized in studying visual cortical activation in early blind participants, including positron emission tomography (PET), functional magnetic resonance imaging (fMRI), transcranial magnetic stimulation (TMS), and electrophysiological data, specifically event-related potentials (ERPs). This paper summarizes and discusses findings of these studies. We hypothesize how mechanisms of cortical plasticity are expressed in congenitally blind participants in comparison to adventitiously blind and short-term visually deprived sighted participants and discuss potential approaches for further investigation of these mechanisms in future research.


2015 ◽  
Vol 45 (10) ◽  
pp. 2111-2122 ◽  
Author(s):  
W. Li ◽  
T. M. Lai ◽  
C. Bohon ◽  
S. K. Loo ◽  
D. McCurdy ◽  
...  

Background: Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are characterized by distorted body image and are frequently co-morbid with each other, although their relationship remains little studied. While there is evidence of abnormalities in visual and visuospatial processing in both disorders, no study has directly compared the two. We used two complementary modalities – event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI) – to test for abnormal activity associated with early visual signaling. Method: We acquired fMRI and ERP data in separate sessions from 15 unmedicated individuals in each of three groups (weight-restored AN, BDD, and healthy controls) while they viewed images of faces and houses of different spatial frequencies. We used joint independent component analyses to compare activity in visual systems. Results: AN and BDD groups demonstrated similar hypoactivity in early secondary visual processing regions and the dorsal visual stream when viewing low spatial frequency faces, linked to the N170 component, as well as in early secondary visual processing regions when viewing low spatial frequency houses, linked to the P100 component. Additionally, the BDD group exhibited hyperactivity in fusiform cortex when viewing high spatial frequency houses, linked to the N170 component. Greater activity in this component was associated with lower attractiveness ratings of faces. Conclusions: Results provide preliminary evidence of similar abnormal spatiotemporal activation in AN and BDD for configural/holistic information for appearance- and non-appearance-related stimuli. This suggests a common phenotype of abnormal early visual system functioning, which may contribute to perceptual distortions.


2008 ◽  
Vol 20 (7) ◽  
pp. 1235-1249 ◽  
Author(s):  
Roel M. Willems ◽  
Aslı Özyürek ◽  
Peter Hagoort

Understanding language always occurs within a situational context and, therefore, often implies combining streams of information from different domains and modalities. One such combination is that of spoken language and visual information, which are perceived together in a variety of ways during everyday communication. Here we investigate whether and how words and pictures differ in terms of their neural correlates when they are integrated into a previously built-up sentence context. This is assessed in two experiments looking at the time course (measuring event-related potentials, ERPs) and the locus (using functional magnetic resonance imaging, fMRI) of this integration process. We manipulated the ease of semantic integration of word and/or picture to a previous sentence context to increase the semantic load of processing. In the ERP study, an increased semantic load led to an N400 effect which was similar for pictures and words in terms of latency and amplitude. In the fMRI study, we found overlapping activations to both picture and word integration in the left inferior frontal cortex. Specific activations for the integration of a word were observed in the left superior temporal cortex. We conclude that despite obvious differences in representational format, semantic information coming from pictures and words is integrated into a sentence context in similar ways in the brain. This study adds to the growing insight that the language system incorporates (semantic) information coming from linguistic and extralinguistic domains with the same neural time course and by recruitment of overlapping brain areas.


2019 ◽  
Author(s):  
Ronja Demel ◽  
Michael Waldmann ◽  
Annekathrin Schacht

Abstract: The influence of emotion on moral judgments has become increasingly prominent in recent years. While explicit normative measures are widely used to investigate this relationship, event-related potentials (ERPs) offer the advantage of a preconscious method to visualize the modulation of moral judgments. Based on Gray and Wegner’s (2009) Dimensional Moral Model, the present study investigated whether the processing of neutral faces is modulated by moral context information. We hypothesized that neutral faces gain emotional valence when presented in a moral context and thus elicit ERP responses comparable to those established for the processing of emotional faces. Participants (N = 26, 13 female) were tested with regard to their implicit (ERPs) and explicit (morality rating) responses to neutral faces, shown in either a morally positive, negative, or neutral context. Higher ERP amplitudes in early (P100, N170) and later (EPN, LPC) processing stages were expected for harmful/helpful scenarios compared to neutral scenarios. Agents and patients were expected to differ for moral compared to neutral scenarios. In the explicit ratings, neutral scenarios were expected to differ from moral scenarios. In ERPs, we found indications for an early modulation of moral valence (harmful/helpful) and an interaction of agency and moral valence after 80–120 ms. Later time sequences showed no significant differences. Morally positive and negative scenarios were rated as significantly different from neutral scenarios. Overall, the results indicate that the relationship of emotion and moral judgments can be observed on a preconscious neural level at an early processing stage as well as in explicit judgments.


2020 ◽  
Author(s):  
Alon Zivony ◽  
Dominique Lamy

Reporting the second of two targets is impaired when these appear in close succession, a phenomenon known as the attentional blink (AB). Despite decades of research, what mechanisms are affected by the AB remains unclear. Specifically, two central issues remain open: Does the AB disrupt attentional processes or reflect a structural limitation in working memory encoding? Does it disrupt perceptual processing or only post-perceptual processes? We address these questions by reviewing event-related potentials (ERP) studies of the AB. The findings reveal that the core influence of the AB is by disrupting attentional engagement (indexed by N2pc). As a consequence, while early processing (indexed by P1/N1) is spared, semantic processing (indexed by N400) and working memory (WM) encoding (indexed by P3b) are compromised: minor disruptions to attentional engagement weaken but do not eliminate semantic processing, whereas they prevent encoding in WM. Thus, semantic processing can survive the blink, whereas encoding in WM does not. To accommodate these conclusions, we suggest a Disrupted Engagement and Perception (DEaP) account of the attentional blink.


2018 ◽  
Author(s):  
Wiebke Hammerschmidt ◽  
Louisa Kulke ◽  
Christina Bröring ◽  
Annekathrin Schacht

In comparison to neutral faces, facial expressions of emotion are known to gain attentional prioritization, mainly demonstrated by means of event-related potentials (ERPs). Recent evidence indicated that such preferential processing can also be elicited by neutral faces when associated with increased motivational salience via reward. It remains, however, an open question whether impacts of inherent emotional salience and associated motivational salience might be integrated. To this aim, expressions and outcomes were orthogonally combined. Participants (N = 42) learned to explicitly categorize happy and neutral faces as either reward- or zero-outcome-related via an associative learning paradigm. ERP components (P1, N170, EPN, and LPC) were measured throughout the experiment, and separately analyzed before (learning phase) and after (consolidation phase) reaching a pre-defined learning criterion. Happy facial expressions boosted early processing stages, as reflected in enhanced amplitudes of the N170 and EPN, both during learning and consolidation. In contrast, effects of monetary reward became evident only after successful learning and in the form of enlarged amplitudes of the LPC, a component linked to higher-order evaluations. Interactions between expressions and associated outcome were absent in all ERP components of interest. The present study provides novel evidence that acquired salience impacts stimulus processing but independently of the effects driven by happy facial expressions.


2019 ◽  
Author(s):  
Sebastian Schindler ◽  
Maximilian Bruchmann ◽  
Bettina Gathmann ◽  
Robert Moeck ◽  
Thomas Straube

Emotional facial expressions lead to modulations of early event-related potentials (ERPs). However, it has so far remained unclear to what extent these modulations represent face-specific effects rather than differences in low-level visual features, and to what extent they depend on available processing resources. To examine these questions, we conducted two preregistered independent experiments (N = 40 in each experiment) using different variants of a novel task which manipulates peripheral perceptual load across levels but keeps overall visual stimulation constant. Centrally, task-irrelevant angry, neutral and happy faces and their Fourier phase-scrambled versions, which preserved low-level visual features, were presented. The results of both studies showed load-independent P1 and N170 emotion effects. Importantly, Bayesian analyses confirmed that these emotion effects were face-independent for the P1 but not for the N170 component. We conclude that, firstly, ERP modulations during the P1 interval strongly depend on low-level visual information, while the emotional N170 modulation requires the processing of figural facial features. Secondly, both P1 and N170 modulations appear to be immune to a large range of variations in perceptual load.
