Temporal neural mechanisms underlying conscious access to different levels of facial stimulus contents

2018 ◽  
Vol 119 (4) ◽  
pp. 1356-1366 ◽  
Author(s):  
Shen-Mou Hsu ◽  
Yu-Fang Yang

An important issue facing the empirical study of consciousness concerns how the contents of incoming stimuli gain access to conscious processing. According to classic theories, facial stimuli are processed in a hierarchical manner. However, it remains unclear how the brain determines which level of stimulus content is consciously accessible when facing an incoming facial stimulus. Accordingly, with a magnetoencephalography technique, this study aims to investigate the temporal dynamics of the neural mechanism mediating which level of stimulus content is consciously accessible. Participants were instructed to view masked target faces at threshold so that, according to behavioral responses, their perceptual awareness alternated from consciously accessing facial identity in some trials to being able to consciously access facial configuration features but not facial identity in other trials. Conscious access at these two levels of facial contents was associated with a series of differential neural events. Before target presentation, different patterns of phase angle adjustment were observed between the two types of conscious access. This effect was followed by stronger phase clustering for awareness of facial identity immediately during stimulus presentation. After target onset, conscious access to facial identity, as opposed to facial configural features, was able to elicit more robust late positivity. In conclusion, we suggest that the stages of neural events, ranging from prestimulus to stimulus-related activities, may operate in combination to determine which level of stimulus contents is consciously accessed. Conscious access may thus be better construed as comprising various forms that depend on the level of stimulus contents accessed.

NEW & NOTEWORTHY The present study investigates how the brain determines which level of stimulus contents is consciously accessible when facing an incoming facial stimulus. Using magnetoencephalography, we show that prestimulus activities together with stimulus-related activities may operate in combination to determine conscious face detection or identification. This finding is distinct from the previous notion that conscious face detection precedes identification and provides novel insights into the temporal dynamics of different levels of conscious face perception.
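The phase-clustering measure mentioned in this abstract is commonly quantified as inter-trial phase coherence (ITC): the length of the mean resultant vector of single-trial phases at a given frequency. The sketch below is a generic illustration on simulated single-channel epochs, not the authors' MEG pipeline; the sampling rate, 10 Hz target frequency, and wavelet settings are illustrative assumptions.

```python
import numpy as np

def morlet_wavelet(freq, sfreq, n_cycles=7):
    """Complex Morlet wavelet at `freq` Hz, sampled at `sfreq` Hz."""
    sigma_t = n_cycles / (2 * np.pi * freq)              # temporal width in seconds
    t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1 / sfreq)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))

def inter_trial_phase_coherence(trials, sfreq, freq):
    """trials: (n_trials, n_times) single-channel epochs -> ITC per time point."""
    wavelet = morlet_wavelet(freq, sfreq)
    analytic = np.array([np.convolve(tr, wavelet, mode="same") for tr in trials])
    phases = np.angle(analytic)                           # single-trial phase
    # ITC = length of the mean resultant vector of phases across trials (0..1)
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

rng = np.random.default_rng(0)
sfreq, n_trials, n_times = 250, 60, 500
t = np.arange(n_times) / sfreq
# Trials with a phase-locked 10 Hz component vs. trials with random phase
locked = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, (n_trials, n_times))
jittered = np.array([np.sin(2 * np.pi * 10 * t + p)
                     for p in rng.uniform(0, 2 * np.pi, n_trials)])
jittered += rng.normal(0, 1, (n_trials, n_times))
print(inter_trial_phase_coherence(locked, sfreq, 10).mean(),
      inter_trial_phase_coherence(jittered, sfreq, 10).mean())
```

With these settings the phase-locked trials yield substantially higher ITC than the phase-jittered ones, which is the kind of contrast the abstract refers to as "stronger phase clustering."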


2020 ◽  
Author(s):  
Lluís Hernández-Navarro ◽  
Ainhoa Hermoso-Mendizabal ◽  
Daniel Duque ◽  
Alexandre Hyafil ◽  
Jaime de la Rocha

It is commonly assumed that, during perceptual decisions, the brain integrates stimulus evidence until reaching a decision, and then performs the response. There are conditions, however (e.g. time pressure), in which the initiation of the response must be prepared in anticipation of the stimulus presentation. It is therefore unclear whether the timing and the choice of perceptual responses depend exclusively on evidence accumulation, or whether preparatory motor signals may interfere with this process. Here, we find that, in a free reaction time auditory discrimination task in rats, the timing of fast responses does not depend on the stimulus, although the choices do, suggesting a decoupling of the mechanisms of action initiation and choice selection. This behavior is captured by a novel model, the Parallel Sensory Integration and Action Model (PSIAM), in which response execution is triggered whenever one of two processes, Action Initiation or Evidence Accumulation, reaches a bound, while the choice category is always set by the latter. Based on this separation, the model accurately predicts the distribution of reaction times when the stimulus is omitted, advanced, or delayed. Furthermore, we show that changes in Action Initiation mediate both post-error slowing and a gradual slowing of the responses within each session. Overall, these results extend the standard models of perceptual decision-making and shed new light on the interaction between action preparation and evidence accumulation.
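To make the proposed decoupling concrete, the sketch below simulates the core race idea attributed to the model: response time is set by whichever of a stimulus-driven evidence-accumulation (EA) process or a stimulus-independent action-initiation (AI) ramp reaches its bound first, while the choice is always read out from the sign of EA. All parameter values are illustrative assumptions, not fitted quantities from the paper.

```python
import numpy as np

def simulate_trial(drift, ea_bound=1.0, ai_bound=1.0, ai_rate=4.0,
                   noise=0.3, dt=0.001, max_t=2.0, rng=None):
    """One trial: RT = first bound crossing of EA or AI; choice = sign of EA."""
    rng = rng or np.random.default_rng()
    ea, ai, t = 0.0, 0.0, 0.0
    while t < max_t:
        t += dt
        # Evidence accumulation: drift set by the stimulus, plus diffusion noise
        ea += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        # Action initiation: stimulus-independent ramp toward its own bound
        ai += ai_rate * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if abs(ea) >= ea_bound or ai >= ai_bound:
            return t, int(ea > 0)   # timing from the race, choice from EA alone
    return max_t, int(ea > 0)       # no bound reached within the trial window

rng = np.random.default_rng(1)
trials = [simulate_trial(drift=0.8, rng=rng) for _ in range(2000)]
rts = np.array([rt for rt, _ in trials])
accuracy = np.mean([choice for _, choice in trials])   # drift > 0, so choice 1 is correct
print(f"mean RT: {rts.mean():.3f} s, accuracy: {accuracy:.2f}")
```

Because the AI ramp typically reaches its bound before EA under these settings, simulated reaction times are largely stimulus-independent while accuracy still tracks the drift, mirroring the dissociation reported for fast responses.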


2019 ◽  
Vol 121 (5) ◽  
pp. 1588-1590 ◽  
Author(s):  
Luca Casartelli

Neural, oscillatory, and computational counterparts of multisensory processing remain a crucial challenge for neuroscientists. Converging evidence underlines a certain efficiency in balancing stability and flexibility of sensory sampling, supporting the general idea that multiple parallel and hierarchically organized processing stages in the brain contribute to our understanding of the (sensory/perceptual) world. Intriguingly, how temporal dynamics impact and modulate multisensory processes in the brain can be investigated by drawing on studies of perceptual illusions.


2015 ◽  
Vol 370 (1668) ◽  
pp. 20140170 ◽  
Author(s):  
Riitta Hari ◽  
Lauri Parkkonen

We discuss the importance of timing in brain function: how the temporal dynamics of the world have left their traces in the brain during evolution and how we can monitor the dynamics of the human brain with non-invasive measurements. Accurate timing is important for the interplay of neurons, neuronal circuitries, brain areas and human individuals. In the human brain, multiple temporal integration windows are hierarchically organized, with temporal scales ranging from microseconds to tens and hundreds of milliseconds for perceptual, motor and cognitive functions, and up to minutes, hours and even months for hormonal and mood changes. Accurate timing is impaired in several brain diseases. From the current repertoire of non-invasive brain imaging methods, only magnetoencephalography (MEG) and scalp electroencephalography (EEG) provide millisecond time-resolution; our focus in this paper is on MEG. Since the introduction of high-density whole-scalp MEG/EEG coverage in the 1990s, the instrumentation has not changed drastically; yet, novel data analyses are advancing the field rapidly by shifting the focus from the mere pinpointing of activity hotspots to seeking stimulus- or task-specific information and to characterizing functional networks. During the next decades, we can expect increased spatial resolution and accuracy of time-resolved brain imaging and a better understanding of brain function, especially its temporal constraints, with the development of novel instrumentation and finer-grained, physiologically inspired generative models of local and network activity. Merging both spatial and temporal information with increasing accuracy and carrying out recordings in naturalistic conditions, including social interaction, will bring much new information about human brain function.


Author(s):  
Fangfang Wen ◽  
Bin Zuo ◽  
Yang Wang ◽  
Shuhan Ma ◽  
Shijie Song ◽  
...  

Past research on women’s preferences for male facial masculinity in Western cultures has produced inconsistent results. Some inconsistency may be related to the use of different facial stimulus manipulations (e.g., between-sex sexual dimorphic facial manipulation or within-sex sexual dimorphic facial manipulation) that do not perfectly avoid non-facial cues, and pregnancy status may also influence women’s face preferences. We therefore recruited pregnant and nonpregnant Chinese women and manipulated the sexual dimorphism of male facial stimuli to explore the influences of manipulation methods, non-facial cues, and pregnancy status on face preferences. Results showed that: (1) in contrast with a general masculinity preference observed in Western cultures, both pregnant and nonpregnant Chinese women preferred feminized and neutral male faces generally; (2) pregnant women’s preference for feminized male faces was stable across manipulation methods, while nonpregnant women preferred feminized male faces except under between-sex sexual dimorphism manipulation; and (3) manipulation methods, rather than non-facial cues, influenced participants’ face preferences. Specifically, women showed the strongest preferences for femininity when face stimuli were manipulated by within-sex sexual dimorphic facial manipulation, followed by unmanipulated faces and between-sex sexual dimorphic facial manipulation. This effect was stronger for nonpregnant women in the unmanipulated condition and for pregnant women in the between-sex sexual dimorphic facial manipulation. This research provides empirical evidence of women’s preferences for sexual dimorphism in male faces in a non-Western culture, as well as the effects of facial manipulation methods, pregnancy status, and the interactions between these factors.


Author(s):  
Maria Del Vecchio

The neural correlates of perceptual awareness are usually investigated by comparing experimental conditions in which subjects are aware or not aware of the delivered stimulus. This, however, implies that subjects report their experience, possibly biasing the neural responses with the post-perceptual processes involved. This Neuro Forum article reviews evidence from an electroencephalography (EEG) study by Cohen and colleagues (Cohen M et al., Journal of Neuroscience 40(25): 4925-4935) addressing the importance of no-report paradigms in the neuroscience of consciousness. In particular, the authors show that the P3b, one of the proposed canonical "signatures" of conscious processing, is strongly elicited only when subjects have to report their experience, prompting a reconsideration of the approach to the neuroscience of consciousness.


1989 ◽  
Vol 155 (S7) ◽  
pp. 93-98 ◽  
Author(s):  
Nancy C. Andreasen

When Kraepelin originally defined and described dementia praecox, he assumed that it was due to some type of neural mechanism. He hypothesised that abnormalities could occur in a variety of brain regions, including the prefrontal, auditory, and language regions of the cortex. Many members of his department, including Alzheimer and Nissl, were actively involved in the search for the neuropathological lesions that would characterise schizophrenia. Although Kraepelin did not use the term ‘negative symptoms’, he described them comprehensively and stated explicitly that he believed the symptoms of schizophrenia could be explained in terms of brain dysfunction: “If it should be confirmed that the disease attacks by preference the frontal areas of the brain, the central convolutions and central lobes, this distribution would in a certain measure agree with our present views about the site of the psychic mechanisms which are principally injured by the disease. On various grounds, it is easy to believe that the frontal cortex, which is specially well developed in man, stands in closer relation to his higher intellectual abilities, and these are the faculties which in our patients invariably suffer profound loss in contrast to memory and acquired ability.” (Kraepelin, 1919, p. 219)


2008 ◽  
Vol 24 (3) ◽  
pp. 419-429 ◽  
Author(s):  
Anthony Landreth ◽  
John Bickle

We briefly describe ways in which neuroeconomics has made contributions to its contributing disciplines, especially neuroscience, and a specific way in which it could make future contributions to both. The contributions of a scientific research programme can be categorized in terms of (1) description and classification of phenomena, (2) the discovery of causal relationships among those phenomena, and (3) the development of tools to facilitate (1) and (2). We consider ways in which neuroeconomics has advanced neuroscience and economics along each line. Then, focusing on electrophysiological methods, we consider a puzzle within neuroeconomics whose solution we believe could facilitate contributions to both neuroscience and economics, in line with category (2). This puzzle concerns how the brain assigns reward values to otherwise incomparable stimuli. According to the common currency hypothesis, dopamine release is a component of a neural mechanism that solves comparability problems. We review two versions of the common currency hypothesis, one proposed by Read Montague and colleagues, the other by William Newsome and colleagues, and fit these hypotheses into considerations of rational choice.
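As a rough illustration of the common currency intuition (in the spirit of a reward-prediction-error account, not a reproduction of either version of the hypothesis discussed above), the sketch below uses a single delta-rule update to map heterogeneous rewards onto one learned value scale on which options can be ranked directly. The option set, reward magnitudes, and learning rate are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
options = {"juice": 0.6, "money": 1.0, "praise": 0.3}   # hypothetical mean rewards
values = {k: 0.0 for k in options}                      # learned common-currency values
alpha = 0.1                                             # learning rate

for _ in range(500):
    choice = rng.choice(list(options))
    reward = rng.normal(options[choice], 0.2)           # noisy outcome
    delta = reward - values[choice]                     # prediction error ("dopamine-like" signal)
    values[choice] += alpha * delta                     # delta-rule value update

# After learning, all three stimuli sit on one scale and can be compared directly.
print(sorted(values.items(), key=lambda kv: -kv[1]))
```

The point of the illustration is only that a single scalar teaching signal suffices to place otherwise incomparable outcomes on a shared value axis, which is the comparability problem the common currency hypothesis is meant to solve.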


2019 ◽  
Author(s):  
Ulrik Beierholm ◽  
Tim Rohe ◽  
Ambra Ferrari ◽  
Oliver Stegle ◽  
Uta Noppeney

To form the most reliable percept of the environment, the brain needs to represent sensory uncertainty. Current theories of perceptual inference assume that the brain computes sensory uncertainty instantaneously and independently for each stimulus.

In a series of psychophysics experiments human observers localized auditory signals that were presented in synchrony with spatially disparate visual signals. Critically, the visual noise changed dynamically over time with or without intermittent jumps. Our results show that observers integrate audiovisual inputs weighted by sensory reliability estimates that combine information from past and current signals as predicted by an optimal Bayesian learner or approximate strategies of exponential discounting.

Our results challenge classical models of perceptual inference where sensory uncertainty estimates depend only on the current stimulus. They demonstrate that the brain capitalizes on the temporal dynamics of the external world and estimates sensory uncertainty by combining past experiences with new incoming sensory signals.
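The following sketch illustrates the two ingredients highlighted in this abstract: a visual-reliability estimate formed by exponentially discounting evidence from earlier trials, and inverse-variance (reliability-weighted) fusion of the auditory and visual location signals. The noise levels, discount factor, and the use of the audiovisual discrepancy as a crude proxy for visual variance are assumptions for illustration, not the authors' Bayesian-learner model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 200
true_loc = rng.uniform(-10, 10, n_trials)
sigma_a = 4.0                                             # auditory noise SD (assumed known)
sigma_v = np.where(np.arange(n_trials) < 100, 1.0, 6.0)   # visual noise SD jumps mid-session

x_a = true_loc + rng.normal(0, sigma_a, n_trials)          # noisy auditory samples
x_v = true_loc + rng.normal(0, sigma_v, n_trials)          # noisy visual samples

lam = 0.9                                                  # exponential-discounting factor
var_v_hat = 4.0                                            # running estimate of visual variance
fused = np.empty(n_trials)
for t in range(n_trials):
    # Reliability-weighted (inverse-variance) fusion using the *current estimate*
    # of visual variance; the estimate lags the true variance after the jump.
    w_v = (1 / var_v_hat) / (1 / var_v_hat + 1 / sigma_a**2)
    fused[t] = w_v * x_v[t] + (1 - w_v) * x_a[t]
    # Crude moment-matching update of visual variance from the audiovisual
    # discrepancy, with older evidence discounted exponentially.
    sample_var = max((x_v[t] - x_a[t]) ** 2 - sigma_a**2, 0.1)
    var_v_hat = lam * var_v_hat + (1 - lam) * sample_var

print(f"mean absolute localization error: {np.mean(np.abs(fused - true_loc)):.2f}")
```

Because the reliability estimate pools over past trials, the visual weight adapts only gradually after the variance jump, which is the behavioral signature contrasted here with purely instantaneous uncertainty estimates.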


2019 ◽  
Author(s):  
David A. Tovar ◽  
Micah M. Murray ◽  
Mark T. Wallace

Objects are the fundamental building blocks of how we create a representation of the external world. One major distinction amongst objects is between those that are animate versus inanimate. Many objects are specified by more than a single sense, yet the nature by which multisensory objects are represented by the brain remains poorly understood. Using representational similarity analysis of human EEG signals, we show enhanced encoding of audiovisual objects when compared to their corresponding visual and auditory objects. Surprisingly, we discovered that the often-found processing advantage for animate objects was not evident in a multisensory context, due to greater neural enhancement of inanimate objects, the more weakly encoded objects under unisensory conditions. Further analysis showed that the selective enhancement of inanimate audiovisual objects corresponded with an increase in shared representations across brain areas, suggesting that neural enhancement was mediated by multisensory integration. Moreover, a distance-to-bound analysis provided critical links between neural findings and behavior. Improvements in neural decoding at the individual exemplar level for audiovisual inanimate objects predicted reaction time differences between multisensory and unisensory presentations during a go/no-go animate categorization task. Interestingly, links between neural activity and behavioral measures were most prominent 100 to 200 ms and 350 to 500 ms after stimulus presentation, corresponding to time periods associated with sensory evidence accumulation and decision-making, respectively. Collectively, these findings provide key insights into a fundamental process the brain uses to maximize the information it captures across sensory systems to perform object recognition.

Significance Statement: Our world is filled with an ever-changing milieu of sensory information that we are able to seamlessly transform into meaningful perceptual experience. We accomplish this feat by combining different features from our senses to construct objects. However, despite the fact that our senses do not work in isolation but rather in concert with each other, little is known about how the brain combines the senses together to form object representations. Here, we used EEG and machine learning to study how the brain processes auditory, visual, and audiovisual objects. Surprisingly, we found that non-living objects, the objects which were more difficult to process with one sense alone, benefited the most from engaging multiple senses.
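For readers unfamiliar with representational similarity analysis (RSA), the sketch below shows the basic operation on simulated EEG-like data: build a representational dissimilarity matrix (RDM) over object exemplars at every time point and then compare RDMs between conditions. The array sizes, the correlation-distance metric, and the simulated data are assumptions for illustration, not the study's actual analysis.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm_over_time(data):
    """data: (n_exemplars, n_channels, n_times) -> (n_times, n_pairs) condensed RDMs."""
    n_times = data.shape[2]
    # Correlation distance between exemplar patterns, one RDM per time point
    return np.array([pdist(data[:, :, t], metric="correlation")
                     for t in range(n_times)])

rng = np.random.default_rng(4)
n_exemplars, n_channels, n_times = 12, 64, 100
audiovisual = rng.normal(size=(n_exemplars, n_channels, n_times))
visual = audiovisual + rng.normal(scale=2.0, size=audiovisual.shape)  # noisier "unisensory" copy

rdm_av = rdm_over_time(audiovisual)
rdm_v = rdm_over_time(visual)

# Representational similarity between conditions: Spearman correlation of the
# two RDMs at every time point (higher = more shared representational geometry).
similarity = np.array([spearmanr(rdm_av[t], rdm_v[t])[0] for t in range(n_times)])
print(f"mean RDM correlation across time: {similarity.mean():.2f}")
```

The same RDM-comparison logic, applied across brain areas or against behavioral dissimilarities, is what underlies the "shared representations" and distance-to-bound analyses described in the abstract.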

