Cross-Talk of Low-Level Sensory and High-Level Cognitive Processing: Development, Mechanisms, and Relevance for Cross-Modal Abilities of the Brain

2020 ◽  
Vol 14 ◽  
Author(s):  
Xiaxia Xu ◽  
Ileana L. Hanganu-Opatz ◽  
Malte Bieler


2021 ◽  
pp. 1-15
Author(s):  
Leor Zmigrod

Abstract Ideological behavior has traditionally been viewed as a product of social forces. Nonetheless, an emerging science suggests that ideological worldviews can also be understood in terms of neural and cognitive principles. The article proposes a neurocognitive model of ideological thinking, arguing that ideological worldviews may be manifestations of individuals’ perceptual and cognitive systems. This model makes two claims. First, there are neurocognitive antecedents to ideological thinking: the brain’s low-level neurocognitive dispositions influence its receptivity to ideological doctrines. Second, there are neurocognitive consequences to ideological engagement: strong exposure and adherence to ideological doctrines can shape perceptual and cognitive systems. This article details the neurocognitive model of ideological thinking and synthesizes the empirical evidence in support of its claims. The model postulates that there are bidirectional processes between the brain and the ideological environment, and so it can address the roles of situational and motivational factors in ideologically motivated action. This endeavor highlights that an interdisciplinary neurocognitive approach to ideologies can facilitate biologically informed accounts of the ideological brain and thus reveal who is most susceptible to extreme and authoritarian ideologies. By investigating the relationships between low-level perceptual processes and high-level ideological attitudes, we can develop a better grasp of our collective history as well as the mechanisms that may structure our political futures.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Helen Feigin ◽  
Shira Baror ◽  
Moshe Bar ◽  
Adam Zaidel

Abstract Perceptual decisions are biased by recent perceptual history, a phenomenon termed ‘serial dependence’. Here, we investigated which aspects of perceptual decisions lead to serial dependence, and disambiguated the influences of low-level sensory information, prior choices, and motor actions. Participants discriminated whether a brief visual stimulus lay to the left or right of the screen center. Following a series of biased ‘prior’ location discriminations, subsequent ‘test’ location discriminations were biased toward the prior choices, even when these were reported via different motor actions (using different keys), and when the prior and test stimuli differed in color. By contrast, prior discriminations about an irrelevant stimulus feature (color) did not substantially influence subsequent location discriminations, even though these were reported via the same motor actions. Additionally, when color (not location) was discriminated, a bias in prior stimulus locations no longer influenced subsequent location discriminations. Although low-level stimuli and motor actions did not trigger serial dependence on their own, similarity of these features across discriminations boosted the effect. These findings suggest that relevance across perceptual decisions is a key factor for serial dependence. Accordingly, serial dependence likely reflects a high-level mechanism by which the brain predicts and interprets new incoming sensory information in accordance with relevant prior choices.
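A choice-history bias of the kind reported here can be illustrated with a toy simulation (not the authors' analysis; the observer model and all parameter values are invented for illustration): a logistic observer whose decision variable receives a small contribution from the previous choice will repeat that choice more often than an observer without the history term.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_choices(stimuli, history_weight, sensitivity=2.0):
    """Toy observer: the probability of choosing 'right' depends on the
    stimulus and, via history_weight, on the previous choice."""
    choices = []
    prev = 0.0                           # previous choice, coded -1 / +1
    for s in stimuli:
        drive = sensitivity * s + history_weight * prev
        p_right = 1.0 / (1.0 + np.exp(-drive))
        choice = 1.0 if rng.random() < p_right else -1.0
        choices.append(choice)
        prev = choice
    return np.array(choices)

stimuli = rng.normal(scale=0.5, size=5000)   # signed stimulus offsets
unbiased = simulate_choices(stimuli, history_weight=0.0)
biased = simulate_choices(stimuli, history_weight=1.0)
```

Fitting the history weight of such a model to real choice sequences is one common way to quantify serial dependence.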


2020 ◽  
Author(s):  
Haider Al-Tahan ◽  
Yalda Mohsenzadeh

Abstract While vision evokes a dense network of feedforward and feedback neural processes in the brain, visual processes are primarily modeled with feedforward hierarchical neural networks, leaving the computational role of feedback processes poorly understood. Here, we developed a generative autoencoder neural network model and adversarially trained it on a categorically diverse data set of images. We hypothesized that the feedback processes in the ventral visual pathway can be represented by reconstruction of the visual information performed by the generative model. We compared the representational similarity of the activity patterns in the proposed model with temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) visual brain responses. The proposed generative model identified two segregated neural dynamics in the visual brain: a temporal hierarchy of processes transforming low-level visual information into high-level semantics in the feedforward sweep, and a temporally later set of inverse processes reconstructing low-level visual information from a high-level latent representation in the feedback sweep. Our results extend previous studies on neural feedback processes by presenting new insight into the algorithmic function of, and the information carried by, the feedback processes in the ventral visual pathway.

Author summary: It has been shown that the ventral visual cortex consists of a dense network of regions with feedforward and feedback connections. The feedforward path processes visual inputs along a hierarchy of cortical areas that starts in early visual cortex (an area tuned to low-level features, e.g. edges/corners) and ends in inferior temporal cortex (an area that responds to higher-level categorical contents, e.g. faces/objects). The feedback connections, in turn, modulate neuronal responses in this hierarchy by broadcasting information from higher to lower areas. In recent years, deep neural network models trained on object recognition tasks have achieved human-level performance and shown activation patterns similar to those in the visual brain. In this work, we developed a generative neural network model that consists of encoding and decoding sub-networks. By comparing this computational model with the human brain's temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) response patterns, we found that the encoder processes resemble the brain's feedforward processing dynamics and the decoder shares similarity with the brain's feedback processing dynamics. These results provide an algorithmic insight into the spatiotemporal dynamics of feedforward and feedback processes in biological vision.
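The representational-similarity comparison described above can be sketched in a few lines (a toy illustration with random data, not the authors' pipeline): build a representational dissimilarity matrix (RDM) for each system, then rank-correlate the upper triangles of the two RDMs.

```python
import numpy as np

def rdm(activations):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the activation patterns of each pair of stimuli (rows)."""
    return 1.0 - np.corrcoef(activations)

def rsa_score(acts_a, acts_b):
    """Spearman correlation between the upper triangles of two RDMs."""
    a, b = rdm(acts_a), rdm(acts_b)
    iu = np.triu_indices_from(a, k=1)
    rank = lambda x: np.argsort(np.argsort(x)).astype(float)
    return float(np.corrcoef(rank(a[iu]), rank(b[iu]))[0, 1])

rng = np.random.default_rng(0)
brain = rng.normal(size=(10, 50))                    # 10 stimuli x 50 "voxels"
model = brain + 0.2 * rng.normal(size=brain.shape)   # model layer sharing the same geometry
```

The same score can be computed per MEG time point or per fMRI region to localize where and when a model layer matches the brain.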


2021 ◽  
Author(s):  
Meng Liu ◽  
Wenshan Dong ◽  
Shaozheng Qin ◽  
Tom Verguts ◽  
Qi Chen

Abstract Human perception and learning are thought to rely on a hierarchical generative model that is continuously updated via precision-weighted prediction errors (pwPEs). However, the neural basis of this cognitive process, and how it unfolds during decision making, remains poorly understood. To investigate this question, we combined a hierarchical Bayesian model (the Hierarchical Gaussian Filter, HGF) with electrophysiological (EEG) recording while participants performed a probabilistic reversal learning task in alternatingly stable and volatile environments. Behaviorally, the HGF fitted significantly better than two non-hierarchical control models. Neurally, low-level and high-level pwPEs were independently encoded by the P300 component. Low-level pwPEs were reflected in the theta (4-8 Hz) frequency band, but high-level pwPEs were not. Furthermore, the expression of high-level pwPEs was stronger for participants with a better HGF fit. These results indicate that the brain employs hierarchical learning, and encodes both low- and high-level learning signals separately and adaptively.
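The core quantity here, a precision-weighted prediction error, can be illustrated with a minimal single-level Gaussian update (the full HGF couples several such levels with volatility-dependent precisions; the numbers below are illustrative only):

```python
def pw_update(mu, pi_belief, obs, pi_obs):
    """One precision-weighted belief update: the prediction error is scaled
    by the observation's precision relative to the posterior precision."""
    pe = obs - mu                        # prediction error
    pi_post = pi_belief + pi_obs         # posterior (updated) precision
    pwpe = (pi_obs / pi_post) * pe       # precision-weighted prediction error
    return mu + pwpe, pi_post, pwpe

mu, pi = 0.0, 1.0                        # prior belief: mean 0, precision 1
mu, pi, pwpe = pw_update(mu, pi, obs=1.0, pi_obs=4.0)
```

Because precision accumulates across observations, repeated consistent evidence yields progressively smaller pwPEs, which is the adaptive weighting the abstract refers to.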


2020 ◽  
Vol 9 (2) ◽  
pp. 55-62
Author(s):  
Michael Holsworth

A fundamental skill required for vocabulary development is word recognition ability. According to Perfetti (1985), word recognition relies on low-level cognitive processing skills being automatic and efficient so that cognitive resources can be allocated to high-level processes, such as inferencing and schema activation, needed for reading comprehension. The low-level processes include orthographic knowledge, semantic knowledge, and phonological awareness, and they must be efficient, fluent, and automatic in second language readers if those readers are to achieve the ultimate goal of reading comprehension. This article briefly describes the concept of word recognition, its relation to vocabulary, and three tests designed to measure the three components of word recognition (orthographic, semantic, and phonological knowledge) in a longitudinal study that investigated the effects of word recognition training on reading comprehension.


2021 ◽  
Author(s):  
Amanda LeBel ◽  
Shailee Jain ◽  
Alexander G. Huth

Abstract There is a growing body of research demonstrating that the cerebellum is involved in language understanding. Early theories assumed that the cerebellum is involved in low-level language processing; however, those theories are at odds with recent work demonstrating cerebellar activation during cognitive tasks. Using an encoding-model framework, we performed an fMRI experiment in which subjects passively listened to five hours of natural language stimuli, allowing us to analyze language processing in the cerebellum with higher precision than previous work. We used these data to fit voxelwise encoding models with five different feature spaces that span the hierarchy of language processing from acoustic input to high-level conceptual processing. Examining the prediction performance of these models on separate BOLD data shows that cerebellar responses to language are almost entirely explained by high-level conceptual language features rather than low-level acoustic or phonemic features. Additionally, we found that the cerebellum has a higher proportion of voxels than cortex that represent social semantic categories, which include “social” and “people” words, and lower representation of all other semantic categories, including “mental”, “concrete”, and “place” words. This suggests that the cerebellum represents language at a conceptual level, with a preference for social information.

Significance Statement: Recent work has demonstrated that, beyond its typical role in motor planning, the cerebellum is implicated in a wide variety of tasks, including language. However, little is known about the language representations in the cerebellum, or how those representations compare to cortex. Using voxelwise encoding models and natural language fMRI data, we demonstrate here that language representations in the cerebellum differ significantly from those in cortex: cerebellar language representations are almost entirely semantic, and the cerebellum over-represents social semantic information relative to cortex. These results suggest that the cerebellum is involved not in language processing per se, but in cognitive processing more generally.
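The voxelwise encoding approach can be sketched with ridge regression on synthetic data (a minimal illustration, not the authors' pipeline; real analyses fit BOLD time series, account for hemodynamics, and typically tune the regularization per voxel):

```python
import numpy as np

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

def voxelwise_performance(X_tr, Y_tr, X_te, Y_te, alpha=1.0):
    """Fit one linear model per voxel, then score each voxel by the
    correlation between predicted and held-out responses."""
    pred = X_te @ fit_ridge(X_tr, Y_tr, alpha)
    return np.array([np.corrcoef(pred[:, v], Y_te[:, v])[0, 1]
                     for v in range(Y_te.shape[1])])

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                                       # stimulus features over time
Y = X @ rng.normal(size=(10, 5)) + 0.1 * rng.normal(size=(200, 5))   # 5 synthetic "voxels"
scores = voxelwise_performance(X[:150], Y[:150], X[150:], Y[150:])
```

Comparing such per-voxel scores across feature spaces (acoustic, phonemic, semantic, etc.) is what lets the study attribute cerebellar responses to high-level conceptual features.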


2021 ◽  
Author(s):  
Ro Julia Robotham ◽  
Sheila Kerry ◽  
Grace E Rice ◽  
Alex Leff ◽  
Matt Lambon Ralph ◽  
...  

Much of the patient literature on the visual recognition of faces, words and objects is based on single case studies of patients selected according to their symptom profile. The Back of the Brain project aims to provide novel insights into the cerebral and cortical architecture underlying visual recognition of complex stimuli by adopting a different approach. A large group of patients was recruited according to their lesion location (in the areas supplied by the posterior cerebral artery) rather than their symptomatology. All patients were assessed with the same battery of sensitive tests of visual perception, enabling the identification of dissociations as well as associations between deficits in face, word and object recognition. This paper provides a detailed description of the extensive behavioural test battery that was developed for the Back of the Brain project and that enables assessment of low-level, intermediate and high-level visual perceptual abilities.
• Extensive behavioural test battery for assessing low-level, intermediate and high-level visual perception in patients with posterior cerebral artery stroke
• Method enabling direct comparison of visual face, word and object processing abilities in patients with posterior cerebral artery stroke


2017 ◽  
Author(s):  
Long Luu ◽  
Cheng Qiu ◽  
Alan A. Stocker

Ding et al. (1) recently proposed that the brain automatically encodes high-level, relative stimulus information (i.e. the ordinal relation between two lines), which it then uses to constrain the decoding of low-level, absolute stimulus features (i.e. the actual orientations of the lines when recalling them). This is an interesting idea that is in line with the self-consistent Bayesian observer model (2, 3) and may have important implications for understanding how the brain processes sensory information. However, the notion suggested by Ding et al. (1) that the brain uses this decoding strategy because it improves perceptual performance is misleading. Here we clarify the decoding model and compare its perceptual performance under various noise and signal conditions.


Author(s):  
DJUWARI DJUWARI ◽  
DINESH K. KUMAR ◽  
SRIDHAR P. ARJUNAN ◽  
GANESH R. NAIK

Surface electromyogram (SEMG) has numerous applications, but the presence of artifacts and cross-talk, especially at low levels of muscle activity, makes the recordings unreliable. Spectral and temporal overlap can make the removal of artifacts and noise, or the separation of relevant signals from other bioelectric signals, extremely difficult. Identification of hand gestures from low-level SEMG is one such application, but the high level of cross-talk makes it highly unreliable. Individual muscles may be considered independent at the local level, and this makes an argument for separating the signals using independent component analysis (ICA). In the recent past, owing to the easy availability of ICA tools, a number of researchers have attempted to use ICA for this application. This paper reports research conducted to evaluate the use of ICA for the separation of muscle activity and the removal of artifacts from SEMG. It discusses some of the conditions that could affect the reliability of the separation, and evaluates issues related to the properties of the signals and the number of sources. It also identifies the lack of a suitable measure of the quality of separation for bioelectric signals, and recommends and tests a more robust measure. Furthermore, it proposes a semi-blind ICA approach that combines prior knowledge of SEMG sources with ICA to identify hand gestures from low-level SEMG recordings. The theoretical analysis and experimental results demonstrate that ICA is suitable for SEMG signals, while also revealing the limitations imposed by the system's inability to identify the correct order and magnitude of the signals. This paper establishes the suitability of the error between the estimated and actual mixing matrices as a means of assessing the quality of separation of the output. This work also demonstrates that semi-blind ICA can accurately identify complex hand gestures from low-level SEMG recordings.
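Because ICA recovers sources only up to permutation, sign, and scale, any error measure on the mixing matrix must first resolve those ambiguities. A minimal sketch of such a measure (an illustration of the idea; the paper's exact formulation may differ) normalizes each column and takes the best column matching:

```python
import numpy as np
from itertools import permutations

def mixing_matrix_error(A_true, A_est):
    """Mean column error between true and estimated mixing matrices after
    resolving ICA's permutation, sign, and scale ambiguities."""
    norm = lambda A: A / np.linalg.norm(A, axis=0, keepdims=True)
    At, Ae = norm(A_true), norm(A_est)
    n = At.shape[1]
    best = np.inf
    for perm in permutations(range(n)):       # try every source ordering
        err = 0.0
        for i, j in enumerate(perm):
            # sign ambiguity: compare against both +column and -column
            err += min(np.linalg.norm(At[:, i] - Ae[:, j]),
                       np.linalg.norm(At[:, i] + Ae[:, j]))
        best = min(best, err / n)
    return best

A = np.array([[1.0, 0.5], [0.2, 1.0]])
A_scrambled = A[:, ::-1] * np.array([-2.0, 3.0])  # permuted, rescaled, sign-flipped
```

Under this measure a perfectly recovered mixing matrix scores zero even when its columns are permuted, rescaled, or sign-flipped, which is exactly the invariance a fair ICA quality metric needs. (Brute-force permutation matching is only practical for a small number of sources.)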


2017 ◽  
Vol 114 (43) ◽  
pp. E9115-E9124 ◽  
Author(s):  
Stephanie Ding ◽  
Christopher J. Cueva ◽  
Misha Tsodyks ◽  
Ning Qian

When a stimulus is presented, its encoding is known to progress from low- to high-level features. How these features are decoded to produce perception is less clear, and most models assume that decoding follows the same low- to high-level hierarchy of encoding. There are also theories arguing for global precedence, reversed hierarchy, or bidirectional processing, but they are descriptive without quantitative comparison with human perception. Moreover, observers often inspect different parts of a scene sequentially to form overall perception, suggesting that perceptual decoding requires working memory, yet few models consider how working-memory properties may affect decoding hierarchy. We probed decoding hierarchy by comparing absolute judgments of single orientations and relative/ordinal judgments between two sequentially presented orientations. We found that lower-level, absolute judgments failed to account for higher-level, relative/ordinal judgments. However, when ordinal judgment was used to retrospectively decode memory representations of absolute orientations, striking aspects of absolute judgments, including the correlation and forward/backward aftereffects between two reported orientations in a trial, were explained. We propose that the brain prioritizes decoding of higher-level features because they are more behaviorally relevant, and more invariant and categorical, and thus easier to specify and maintain in noisy working memory, and that more reliable higher-level decoding constrains less reliable lower-level decoding.
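The retrospective-decoding idea, conditioning the decoded absolute feature on the earlier ordinal decision, can be illustrated with a Monte Carlo sketch (a toy model with invented parameters, not the authors' fitted model): decode an orientation from a noisy memory trace, but keep only posterior samples consistent with the reported ordinal judgment.

```python
import numpy as np

rng = np.random.default_rng(2)

def conditioned_estimate(memory, boundary, chose_greater, sigma=5.0, n=100_000):
    """Decode an absolute orientation from a noisy memory trace, keeping only
    posterior samples consistent with the earlier ordinal decision."""
    samples = memory + sigma * rng.normal(size=n)   # posterior samples (flat prior)
    keep = samples > boundary if chose_greater else samples < boundary
    return float(samples[keep].mean())

# Memory sits exactly at the reference: conditioning on the ordinal choice
# "greater" pushes the decoded orientation away from the reference, a
# repulsive bias that unconditioned decoding does not produce.
unconditioned = float((0.0 + 5.0 * rng.normal(size=100_000)).mean())
conditioned = conditioned_estimate(memory=0.0, boundary=0.0, chose_greater=True)
```

This kind of conditioning is one way higher-level, categorical decisions can constrain lower-level decoding in noisy working memory, as the abstract proposes.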

