ERP Research on Semantic Matching of Audio-visual Information in Interfaces in Quiet-Noise Situations

Author(s):  
Lingling Hu ◽  
Chengqi Xue ◽  
Junkai Shao

In this study, event-related potentials (ERPs) were used to examine whether the brain exerts an inhibition effect on interfering audio-visual information in a Chinese interface. Concrete icons (flame and snowflake) or Chinese characters ([Formula: see text] and [Formula: see text]) with opposite semantics were used as target carriers, and colors (red and blue) and spoken words ([Formula: see text] and [Formula: see text]) were used as audio-visual intervention stimuli. In the experiment, the target carrier and audio-visual intervention were presented in random combinations, and subjects had to quickly judge whether the semantics of the two matched. A comparison of overall cognitive performance across the two carriers showed that the brain exerted a more pronounced inhibition effect on audio-visual intervention stimuli with different semantics (SBH/LBH and SRC/LRC) than on those with the same semantics (SRH/LRH). The semantic mismatch elicited a significant N400, indicating that semantic interference in interface information triggers the brain’s inhibition effect: the more complex the semantic matching of interface information, the larger the N400 amplitude. The results confirmed that the semantic relationship between target carrier and audio-visual intervention was the key factor affecting the cognitive inhibition effect. Moreover, under the different intervention stimuli, the negative ERP activity elicited by Chinese characters in frontal and parietal-occipital regions was more pronounced than that elicited by concrete icons, indicating that concrete icons produced a weaker inhibition effect than Chinese characters. We therefore propose that this inhibition effect is based on the semantic constraints of the target carrier itself, which may derive from the learned knowledge and intuitive experience stored in the human brain.


2009 ◽  
Vol 23 (2) ◽  
pp. 63-76 ◽  
Author(s):  
Silke Paulmann ◽  
Sarah Jessen ◽  
Sonja A. Kotz

The multimodal nature of human communication is well established, yet few empirical studies have systematically examined the widely held belief that multimodal perception is facilitated in comparison to unimodal or bimodal perception. In the current experiment we first explored the processing of unimodally presented facial expressions. Auditory (prosodic and/or lexical-semantic) information was then presented together with the visual information to investigate the processing of bimodal (facial and prosodic cues) and multimodal (facial, lexical, and prosodic cues) human communication. Participants engaged in an identity identification task while event-related potentials (ERPs) were recorded to examine early processing mechanisms as reflected in the P200 and N300 components. While the former component has repeatedly been linked to the processing of physical stimulus properties, the latter has been linked to more evaluative, “meaning-related” processing. A direct relationship was found between P200 and N300 amplitudes and the number of information channels present: the multimodal condition elicited the smallest P200 and N300 amplitudes, followed by larger amplitudes in the bimodal condition, with the largest amplitudes observed in the unimodal condition. These data suggest that multimodal information induces clear facilitation in comparison to unimodal or bimodal information. The advantage of multimodal perception as reflected in the P200 and N300 components may thus reflect one of the mechanisms allowing for fast and accurate information processing in human communication.


Author(s):  
Weiyu Zhang ◽  
Se-Hoon Jeong ◽  
Martin Fishbein†

This study investigates how multitasking interacts with levels of sexually explicit content to influence an individual’s ability to recognize TV content. A 2 (multitasking vs. non-multitasking) × 3 (low, medium, and high sexual content) between-subjects experiment was conducted. The analyses revealed that multitasking not only impaired task performance but also decreased TV-content recognition. An inverted-U relationship between the degree of sexually explicit content and recognition of TV content was found, but only when subjects were multitasking. In addition, multitasking interfered with subjects’ ability to recognize audio information more than with their ability to recognize visual information.

