Inhibition Effect of Audio-visual Semantic Interference in Chinese Interface: An ERP Study of Concrete Icons and Chinese Characters

Author(s): Junkai Shao, Chengqi Xue

In this study, event-related potentials (ERPs) were used to examine whether the brain exerts an inhibition effect on interfering audio-visual information in a Chinese interface. Concrete icons (flame and snowflake) or Chinese characters ([Formula: see text] and [Formula: see text]) with opposite semantics were used as target carriers, and colors (red and blue) and speech sounds ([Formula: see text] and [Formula: see text]) were used as audio-visual intervention stimuli. In the experiment, a target carrier and an audio-visual intervention were presented in random combination, and the subjects had to quickly judge whether the semantics of the two matched. Comparing the overall cognitive performance across the two carriers showed that the brain exerted a more pronounced inhibition effect on audio-visual intervention stimuli with different semantics (SBH/LBH and SRC/LRC) than on those with the same semantics (SRH/LRH). The semantic mismatch elicited a significant N400, indicating that semantic interference in interface information triggers the brain's inhibition effect; the more complex the semantic matching of interface information, the higher the N400 amplitude. The results confirmed that the semantic relationship between target carrier and audio-visual intervention was the key factor affecting the cognitive inhibition effect. Moreover, under different intervention stimuli, the ERP negativity elicited by Chinese characters in frontal and parietal-occipital regions was more pronounced than that elicited by concrete icons, indicating that concrete icons produced a weaker inhibition effect than Chinese characters. We therefore considered that this inhibition effect rests on the semantic constraints of the target carrier itself, which might derive from learned knowledge and intuitive experience stored in the human brain.
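As an illustration of how such a match/mismatch N400 effect could be quantified from epoched EEG, here is a minimal sketch using MNE-Python. The file name, event codes, channel labels, and the 300-500 ms window are assumptions for illustration, not details taken from the study.

```python
# Hypothetical sketch: quantifying an N400 mismatch effect with MNE-Python.
# File name, event codes, channels, and time window are assumptions.
import mne

raw = mne.io.read_raw_fif("audio_visual_task_raw.fif", preload=True)
events = mne.find_events(raw)
event_id = {"semantic_match": 1, "semantic_mismatch": 2}  # assumed codes

epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0), preload=True)

# Average within condition and measure mean amplitude in a classic
# N400 window over centro-parietal channels.
picks = ["Cz", "CPz", "Pz"]  # assumed montage labels
window = (0.30, 0.50)

def mean_amplitude(evoked):
    data = evoked.copy().pick(picks).crop(*window).data
    return data.mean() * 1e6  # volts -> microvolts

match = mean_amplitude(epochs["semantic_match"].average())
mismatch = mean_amplitude(epochs["semantic_mismatch"].average())
print(f"N400 effect (mismatch - match): {mismatch - match:.2f} uV")
```

A more negative mismatch amplitude in this window would be consistent with the N400 pattern the abstract describes.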

Author(s): Hamid Karimi-Rouzbahani, Mozhgan Shahmohammadi, Ehsan Vahab, Saeed Setayeshi, Thomas Carlson

Humans are remarkably efficient at recognizing objects, yet understanding how the brain performs object recognition has been challenging. Our understanding has advanced substantially in recent years with the development of multivariate decoding methods. Most state-of-the-art decoding procedures make use of the 'mean' neural activation to extract object category information, which overlooks temporal variability in the signals. Here, we studied category-related information in 30 mathematically distinct features from electroencephalography (EEG) across three independent and highly varied datasets using multivariate decoding. While the event-related potential (ERP) components N1 and P2a were among the most informative features, the informative original signal samples and wavelet coefficients, selected through principal component analysis, outperformed them. These four informative features showed more pronounced decoding in the theta frequency band, which has been suggested to support feed-forward processing of visual information in the brain. Correlational analyses showed that the features most informative about object categories could predict participants' behavioral performance (reaction time) more accurately than the less informative features. These results suggest a new approach for studying how the human brain encodes object category information and how we can read it out more optimally to investigate the temporal dynamics of the neural code. The code is available online at https://osf.io/wbvpn/.
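To make the contrast between 'mean' activation features and PCA-selected sample features concrete, here is a minimal scikit-learn sketch on simulated epochs. The array shapes, labels, and classifier choice are assumptions, not the authors' pipeline (which is available at the OSF link above).

```python
# Illustrative sketch (not the authors' pipeline): contrast decoding from a
# within-window 'mean' feature against PCA-reduced raw samples.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 100
X = rng.standard_normal((n_trials, n_channels, n_times))  # simulated epochs
y = rng.integers(0, 2, n_trials)                          # object category

# Feature 1: mean activation per channel within the window.
X_mean = X.mean(axis=2)

# Feature 2: all time samples, reduced by PCA to retain temporal structure.
X_samples = X.reshape(n_trials, -1)
clf_pca = make_pipeline(PCA(n_components=30), LinearDiscriminantAnalysis())

acc_mean = cross_val_score(LinearDiscriminantAnalysis(), X_mean, y, cv=5).mean()
acc_pca = cross_val_score(clf_pca, X_samples, y, cv=5).mean()
print(f"mean feature: {acc_mean:.2f}, PCA samples: {acc_pca:.2f}")
```

On real data, the abstract's claim is that sample- and wavelet-based features selected this way outperform the mean feature.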


2021, Vol. 11 (7), pp. 2987
Author(s): Takumi Okumura, Yuichi Kurita

Image therapy, which creates illusions with a mirror and a head-mounted display, assists movement relearning in stroke patients. Mirror therapy presents the movement of the unaffected limb in a mirror, creating the illusion of movement of the affected limb. As visual information alone cannot create a fully immersive experience, we propose a cross-modal strategy that supplements the image with sensory information. By integrating the stimuli received from multiple sensory organs, the brain complements missing senses, and the patient experiences a different sense of motion. Our system generates the sense of stair-climbing in a subject walking on a level floor. The force sensation is presented by a pneumatic gel muscle (PGM). Based on motion analysis in a human lower-limb model and the characteristics of the force exerted by the PGM, we set the appropriate air pressure of the PGM. The effectiveness of the proposed system was evaluated by surface electromyography and a questionnaire. The experimental results showed that synchronizing the force sensation with visual information matched the motor and perceived sensations at the muscle-activity level, enhancing the sense of stair-climbing, and that the visual condition significantly improved the illusion intensity during stair-climbing.
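As a rough illustration of how a target assist force could be mapped to PGM air pressure, here is a sketch under an assumed linear force-pressure characteristic. The model form and all constants are invented for illustration; the study derives its settings from lower-limb motion analysis and measured PGM characteristics.

```python
# Hedged sketch: choosing PGM air pressure for a target assist force.
# The linear model F = k * (p - p0) and all constants are assumptions,
# not values from the paper.
K_N_PER_KPA = 0.8    # assumed force gained per kPa of pressure
P0_KPA = 20.0        # assumed activation threshold pressure
P_MAX_KPA = 250.0    # assumed safe upper limit

def pressure_for_force(target_force_n: float) -> float:
    """Invert the assumed linear PGM model, clamped to the valid range."""
    p = P0_KPA + target_force_n / K_N_PER_KPA
    return max(P0_KPA, min(p, P_MAX_KPA))

# e.g. a stair-climbing phase calling for 60 N of assist
print(f"{pressure_for_force(60.0):.0f} kPa")
```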


2021, Vol. 11 (8), pp. 3397
Author(s): Gustavo Assunção, Nuno Gonçalves, Paulo Menezes

Human beings have developed remarkable abilities to integrate information from various sensory sources by exploiting their inherent complementarity. Perceptual capabilities are thereby heightened, enabling, for instance, the well-known "cocktail party" and McGurk effects, i.e., speech disambiguation from a panoply of sound signals. This fusion ability is also key in refining the perception of sound source location, as in distinguishing whose voice is being heard in a group conversation. Furthermore, neuroscience has identified the superior colliculus as the brain region responsible for this modality fusion, and a handful of biological models have been proposed to approximate its underlying neurophysiological process. Drawing inspiration from one of these models, this paper presents a methodology for effectively fusing correlated auditory and visual information for active speaker detection. Such an ability can have a wide range of applications, from teleconferencing systems to social robotics. The detection approach initially routes auditory and visual information through two specialized neural network structures. The resulting embeddings are fused via a novel layer based on the superior colliculus, whose topological structure emulates the spatial cross-mapping of unimodal perceptual fields. The validation process employed two publicly available datasets, with the achieved results confirming and greatly surpassing initial expectations.
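To make the fusion idea concrete, here is a minimal PyTorch sketch in which unimodal embeddings interact through a bilinear layer, loosely emulating cross-mapping of perceptual fields. The layer sizes and architecture are assumptions, not the authors' published model.

```python
# Minimal PyTorch sketch of the general idea: unimodal embeddings fused by a
# layer whose weights cross-map the two perceptual fields. An illustration
# under assumed sizes, not the paper's architecture.
import torch
import torch.nn as nn

class CrossMapFusion(nn.Module):
    def __init__(self, audio_dim=128, visual_dim=128, out_dim=64):
        super().__init__()
        # Bilinear layer: every audio unit interacts with every visual unit,
        # loosely emulating spatial cross-mapping in the superior colliculus.
        self.fuse = nn.Bilinear(audio_dim, visual_dim, out_dim)
        self.head = nn.Linear(out_dim, 1)  # active-speaker score

    def forward(self, audio_emb, visual_emb):
        joint = torch.relu(self.fuse(audio_emb, visual_emb))
        return torch.sigmoid(self.head(joint))

model = CrossMapFusion()
a = torch.randn(8, 128)  # batch of audio embeddings
v = torch.randn(8, 128)  # batch of visual embeddings
print(model(a, v).shape)  # torch.Size([8, 1])
```

In the paper, the two embeddings would come from the specialized audio and visual networks mentioned above.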


2021, Vol. 11 (1)
Author(s): Helen Feigin, Shira Baror, Moshe Bar, Adam Zaidel

Perceptual decisions are biased by recent perceptual history, a phenomenon termed 'serial dependence.' Here, we investigated what aspects of perceptual decisions lead to serial dependence, and disambiguated the influences of low-level sensory information, prior choices, and motor actions. Participants discriminated whether a brief visual stimulus lay to the left or right of the screen center. Following a series of biased 'prior' location discriminations, subsequent 'test' location discriminations were biased toward the prior choices, even when these were reported via different motor actions (using different keys), and when the prior and test stimuli differed in color. By contrast, prior discriminations about an irrelevant stimulus feature (color) did not substantially influence subsequent location discriminations, even though these were reported via the same motor actions. Additionally, when color (not location) was discriminated, a bias in prior stimulus locations no longer influenced subsequent location discriminations. Although low-level stimuli and motor actions did not trigger serial dependence on their own, similarity of these features across discriminations boosted the effect. These findings suggest that relevance across perceptual decisions is a key factor for serial dependence. Accordingly, serial dependence likely reflects a high-level mechanism by which the brain predicts and interprets new incoming sensory information in accordance with relevant prior choices.
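One common way to quantify such a choice-history bias is to regress the current choice on the current stimulus plus the previous choice. The sketch below does this on simulated data; all variable names and parameters are assumptions rather than the study's actual analysis.

```python
# Illustrative sketch: regress the current choice on the current stimulus
# and the previous choice. Data are simulated, not the study's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
stimulus = rng.normal(0, 1, n)        # signed offset from screen center
choice = np.zeros(n, dtype=int)
for t in range(n):
    prev = choice[t - 1] if t > 0 else 0
    # simulated observer: mostly stimulus-driven, slightly biased to repeat
    drive = 2.0 * stimulus[t] + 0.5 * (2 * prev - 1)
    choice[t] = rng.random() < 1 / (1 + np.exp(-drive))

X = sm.add_constant(np.column_stack([stimulus[1:], 2 * choice[:-1] - 1]))
fit = sm.Logit(choice[1:], X).fit(disp=0)
print(fit.params)  # positive weight on previous choice = serial dependence
```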


2021
Author(s): Shachar Sherman, Koichi Kawakami, Herwig Baier

The brain is assembled during development by both innate and experience-dependent mechanisms [1-7], but the relative contribution of these factors is poorly understood. Axons of retinal ganglion cells (RGCs) connect the eye to the brain, forming a bottleneck for the transmission of visual information to central visual areas. RGCs secrete molecules from their axons that control the proliferation, differentiation, and migration of downstream components [7-9]. Spontaneously generated waves of retinal activity, but also intense visual stimulation, can entrain responses of RGCs [10] and central neurons [11-16]. Here we asked how the cellular composition of central targets is altered in a vertebrate brain that is depleted of retinal input throughout development. For this, we first established a molecular catalog [17] and gene expression atlas [18] of neuronal subpopulations in the retinorecipient areas of larval zebrafish. We then searched for changes in lakritz (atoh7-) mutants, in which RGCs do not form [19]. Although individual forebrain-expressed genes are dysregulated in lakritz mutants, the complete set of 77 putative neuronal cell types in the thalamus, pretectum, and tectum is present. While neurogenesis and differentiation trajectories are overall unaltered, a greater proportion of cells remains in an uncommitted progenitor stage in the mutant. Optogenetic stimulation of a pretectal area [20,21] evokes a visual behavior in blind mutants indistinguishable from wild type. Our analysis shows that, in this vertebrate visual system, neurons are produced more slowly, but are specified and wired up in a proper configuration, in the absence of any retinal signals.
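As a toy illustration of testing whether a larger fraction of cells remains in a progenitor state in mutants, one could compare cell-state counts with a chi-square test. The counts below are invented; the study itself works from single-cell expression atlases.

```python
# Hedged sketch: comparing progenitor vs. committed cell counts between
# genotypes. All counts are assumed, for illustration only.
from scipy.stats import chi2_contingency

#                 progenitor  committed
counts = [[420, 3580],   # lakritz (atoh7-) mutant, assumed numbers
          [260, 3740]]   # wild type, assumed numbers

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.1f}, p={p:.2g}")
```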


2018, Vol. 2018, pp. 1-15
Author(s): Xinxin Yang, Haibo Yang, Fengdi Wu, Zhipeng Qi, Jiashuo Li, et al.

Manganese (Mn) can accumulate in the striatum of the brain following overexposure. Oxidative stress is a well-recognized mechanism in Mn-induced neurotoxicity, and glutathione (GSH) depletion has been shown to be a key factor in oxidative damage during Mn exposure. However, no study has focused on how dysfunction of GSH synthesis induces oxidative stress in the brain during Mn exposure. The objective of the present study was to explore the mechanism by which Mn disrupts GSH synthesis via EAAC1 and xCT in vitro and in vivo. Primary neurons and astrocytes were cultured and treated with different doses of Mn to observe the state of the cells, measure levels of GSH and reactive oxygen species (ROS), and quantify mRNA and protein expression of EAAC1 and xCT. Mice were randomly divided into seven groups, which received saline; 12.5, 25, or 50 mg/kg MnCl2; 500 mg/kg AAH (EAAC1 inhibitor) + 50 mg/kg MnCl2; 75 mg/kg SSZ (xCT inhibitor) + 50 mg/kg MnCl2; or 100 mg/kg NAC (GSH rescuer) + 50 mg/kg MnCl2, once daily for two weeks. Levels of EAAC1, xCT, ROS, GSH, malondialdehyde (MDA), protein sulfhydryl, carbonyl, and 8-hydroxy-2-deoxyguanosine (8-OHdG), along with morphological and ultrastructural features of the striatum, were then measured. Mn reduced the protein levels, mRNA expression, and immunofluorescence intensity of EAAC1 and xCT. Mn also decreased GSH and sulfhydryl levels and increased ROS, MDA, 8-OHdG, and carbonyl levels in a dose-dependent manner. Injury-related pathological and ultrastructural changes were evident in the striatum of Mn-exposed mice. In conclusion, excessive exposure to Mn disrupts GSH synthesis through inhibition of EAAC1 and xCT, triggering oxidative damage in the striatum.
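A dose-dependent change such as the GSH decrease is typically tested with a one-way ANOVA across dose groups; the sketch below shows this on simulated values, which are placeholders rather than the paper's measurements.

```python
# Illustrative sketch: one-way ANOVA across MnCl2 dose groups.
# Group means, spreads, and sizes are simulated, not the study's data.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
# GSH levels (arbitrary units) for saline and three MnCl2 doses (mg/kg)
gsh = {dose: rng.normal(loc=mu, scale=2.0, size=10)
       for dose, mu in [("saline", 30), ("12.5", 26), ("25", 22), ("50", 17)]}

stat, p = f_oneway(*gsh.values())
print(f"F={stat:.1f}, p={p:.2g}")
```

A significant F-test would then normally be followed by post-hoc pairwise comparisons against the saline group.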


2020
Author(s): Haider Al-Tahan, Yalda Mohsenzadeh

While vision evokes a dense network of feedforward and feedback neural processes in the brain, visual processes are primarily modeled with feedforward hierarchical neural networks, leaving the computational role of feedback processes poorly understood. Here, we developed a generative autoencoder neural network model and adversarially trained it on a categorically diverse dataset of images. We hypothesized that the feedback processes in the ventral visual pathway can be represented by reconstruction of the visual information performed by the generative model. We compared the representational similarity of the activity patterns in the proposed model with temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) visual brain responses. The proposed generative model identified two segregated neural dynamics in the visual brain: a temporal hierarchy of processes transforming low-level visual information into high-level semantics in the feedforward sweep, and a temporally later dynamic of inverse processes reconstructing low-level visual information from a high-level latent representation in the feedback sweep. Our results add to previous studies on neural feedback processes by offering new insight into the algorithmic function of, and the information carried by, the feedback processes in the ventral visual pathway.

Author summary: It has been shown that the ventral visual cortex consists of a dense network of regions with feedforward and feedback connections. The feedforward path processes visual inputs along a hierarchy of cortical areas that starts in early visual cortex (an area tuned to low-level features, e.g., edges and corners) and ends in inferior temporal cortex (an area that responds to higher-level categorical content, e.g., faces and objects). The feedback connections, in turn, modulate neuronal responses in this hierarchy by broadcasting information from higher to lower areas. In recent years, deep neural network models trained on object recognition tasks have achieved human-level performance and shown activation patterns similar to those of the visual brain. In this work, we developed a generative neural network model that consists of encoding and decoding sub-networks. By comparing this computational model with temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) response patterns of the human brain, we found that the encoder processes resemble the brain's feedforward processing dynamics and the decoder shares similarity with the brain's feedback processing dynamics. These results provide algorithmic insight into the spatiotemporal dynamics of feedforward and feedback processes in biological vision.
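To illustrate the representational-similarity comparison between model activations and brain responses, here is a minimal RSA sketch. The array shapes and random data are placeholders, not the study's MEG/fMRI recordings.

```python
# Minimal RSA sketch: build representational dissimilarity matrices (RDMs)
# from model activations and brain responses, then correlate them.
# Shapes and data are assumed placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_images = 92
model_acts = rng.standard_normal((n_images, 512))   # e.g. a decoder layer
brain_resp = rng.standard_normal((n_images, 306))   # e.g. MEG sensor pattern

# correlation distance between every pair of image-evoked patterns
rdm_model = pdist(model_acts, metric="correlation")
rdm_brain = pdist(brain_resp, metric="correlation")

rho, p = spearmanr(rdm_model, rdm_brain)
print(f"model-brain RDM similarity: rho={rho:.2f}")
```

Repeating this per model layer and per MEG time point (or fMRI region) is what lets such a study map encoder and decoder stages onto feedforward and feedback dynamics.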

