Learning improves conscious access at the bottom, but not the top: Reverse hierarchical effects in perceptual learning and metacognition

2016 ◽  
Author(s):  
Benjamin Chen ◽  
Matthew Mundy ◽  
Naotsugu Tsuchiya

Abstract
Experience with visual stimuli can improve perceptual performance for those stimuli, a phenomenon termed visual perceptual learning (VPL). But how does VPL shape our conscious experience of the learned stimuli? VPL has been found to improve measures of metacognition, suggesting increased conscious stimulus accessibility. Such studies, however, have largely failed to control objective task accuracy, which typically correlates with metacognition. Here, using a staircase method to control for this confound, we investigated whether VPL improves the metacognitive accuracy of perceptual judgements. Across three consecutive days, subjects learned to discriminate faces based on either their identity or their contrast. Holding objective accuracy constant, perceptual thresholds improved in both tasks, while metacognitive accuracy diverged: face contrast VPL improved metacognition, whereas face identity VPL did not. Our findings can be interpreted within a reverse hierarchy theory-like model of VPL, which counterintuitively predicts that VPL of low-level, but not high-level, stimulus properties should improve conscious stimulus accessibility.
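The abstract mentions a staircase method for holding objective accuracy constant across observers. The paper does not specify which staircase rule was used; as an illustrative sketch, a standard 2-down-1-up staircase (which converges on the stimulus level yielding roughly 70.7% correct) could look like this. The function names and the simulated psychometric function are hypothetical:

```python
import random

def two_down_one_up(start, step, n_trials, p_correct):
    """Simulate a 2-down-1-up adaptive staircase.

    `p_correct(level)` is a hypothetical psychometric function returning
    the probability of a correct response at a given stimulus level
    (0 = hardest, 1 = easiest). Two consecutive correct responses make
    the task harder; any error makes it easier. The track converges
    near the ~70.7%-correct threshold level.
    """
    level = start
    correct_streak = 0
    history = []
    for _ in range(n_trials):
        history.append(level)
        if random.random() < p_correct(level):
            correct_streak += 1
            if correct_streak == 2:            # two in a row correct -> step down
                level = max(level - step, 0.0)
                correct_streak = 0
        else:                                  # error -> step up
            level = min(level + step, 1.0)
            correct_streak = 0
    return history
```

A threshold estimate is typically taken as the mean level over the final reversals or trials, e.g. `sum(history[-50:]) / 50`.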

2018 ◽  
Author(s):  
Ruyuan Zhang ◽  
Duje Tadin

Abstract
Visual perceptual learning (VPL) can lead to long-lasting perceptual improvements. While the efficacy of VPL is well established, there is still considerable debate about what mechanisms underlie its effects. Much of this debate concentrates on where along the visual processing hierarchy behaviorally relevant plasticity takes place. Here, we aimed to tackle this question in the context of motion processing, a domain where links between behavior and processing hierarchy are well established. Specifically, we took advantage of an established transition from component-dependent representations at the earliest level to pattern-dependent representations at the middle level of cortical motion processing. We trained two groups of participants on the same motion direction identification task using either grating or plaid stimuli. A set of pre- and post-training tests was used to determine the degree of learning specificity and generalizability. This approach allowed us to disentangle contributions from both low- and mid-level motion processing, as well as high-level cognitive changes. We observed a complete bi-directional transfer of learning between component and pattern stimuli as long as they shared the same apparent motion direction. This result indicates learning-induced plasticity at intermediate levels of motion processing. Moreover, we found that motion VPL is specific to the trained stimulus direction, speed, size, and contrast, highlighting the pivotal role of basic visual features in VPL and diminishing the possibility of non-sensory decision-level enhancements. Taken together, our study psychophysically examined a variety of factors mediating motion VPL, and demonstrated that motion VPL most likely alters visual computation at the middle stage of motion processing.
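Learning specificity versus transfer, as tested here between trained and untrained stimuli, is often summarised as a ratio of improvements. The abstract does not state how transfer was quantified; the following is one common convention, sketched with hypothetical function and parameter names:

```python
def transfer_index(pre_trained, post_trained, pre_untrained, post_untrained):
    """Ratio of improvement on an untrained condition to improvement on
    the trained condition, using performance scores where higher is
    better. 1.0 indicates complete transfer (as reported between
    component and pattern stimuli sharing a motion direction);
    0.0 indicates full specificity (as for untrained directions,
    speeds, sizes, and contrasts).
    """
    trained_gain = post_trained - pre_trained
    untrained_gain = post_untrained - pre_untrained
    if trained_gain == 0:
        raise ValueError("no learning on the trained condition")
    return untrained_gain / trained_gain
```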


2021 ◽  
Author(s):  
Ning Mei ◽  
Roberto Santana ◽  
David Soto

Abstract
Despite advances in the neuroscience of visual consciousness over the last decades, we still lack a framework for understanding the scope of unconscious processing and how it relates to conscious experience. Previous research observed brain signatures of unconscious contents in visual cortex, but these have not been identified in a reliable manner: low trial numbers and signal detection theoretic constraints did not allow conscious perception to be decisively ruled out. Critically, the extent to which unconscious content is represented in high-level processing stages along the ventral visual stream and linked prefrontal areas remains unknown. Using a within-subject, high-precision, highly sampled fMRI approach, we show that unconscious contents, even those associated with null sensitivity, can be reliably decoded from multivoxel patterns that are highly distributed along the ventral visual pathway and also involve prefrontal substrates. Notably, the neural representation in these areas generalised across conscious and unconscious visual processing states, placing constraints on prior findings that fronto-parietal substrates support the representation of conscious contents and suggesting revisions to models of consciousness such as the neuronal global workspace. We then provide a computational model simulation of visual information processing and representation in the absence of perceptual sensitivity, using feedforward convolutional neural networks trained to perform a visual task similar to that of the human observers. The work provides a novel framework for pinpointing the neural representation of unconscious knowledge across different task domains.
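The key generalisation test described here trains a decoder on multivoxel patterns from one visibility state and evaluates it on the other. The paper's actual decoding pipeline is not given in the abstract; as a minimal sketch of the cross-state logic, assuming a simple nearest-centroid decoder and synthetic data in place of fMRI patterns:

```python
import numpy as np

def cross_state_generalisation(X_train, y_train, X_test, y_test):
    """Train a nearest-centroid decoder on patterns from one visibility
    state (e.g. conscious trials) and test it on patterns from the
    other (e.g. unconscious trials). Returns decoding accuracy;
    above-chance accuracy indicates a representation shared across
    processing states.
    """
    # One centroid per stimulus class, estimated from the training state.
    centroids = {c: X_train[y_train == c].mean(axis=0)
                 for c in np.unique(y_train)}
    classes = sorted(centroids)
    preds = []
    for x in X_test:
        dists = [np.linalg.norm(x - centroids[c]) for c in classes]
        preds.append(classes[int(np.argmin(dists))])
    return float(np.mean(np.array(preds) == y_test))
```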


2015 ◽  
Vol 1612 ◽  
pp. 140-151 ◽  
Author(s):  
Jyoti Mishra ◽  
Camarin Rolle ◽  
Adam Gazzaley

2012 ◽  
Vol 65 (6) ◽  
pp. 1123-1138 ◽  
Author(s):  
Daniel de Zilva ◽  
Chris J. Mitchell

Human participants received exposure to similar visual stimuli (AW and BW) that shared a common feature (W). Experiment 1 demonstrated that subsequent discrimination between AW and BW was more accurate when the two stimuli were preexposed on an intermixed schedule (AW, BW, AW, BW…) than when they were preexposed on a blocked schedule (AW, AW…BW, BW…): the intermixed–blocked effect. Furthermore, memory for the unique features of the stimuli (A and B) was better when the stimuli were preexposed on an intermixed schedule than when they were preexposed on a blocked schedule. Conversely, memory for the common features of the stimuli (W) was better when the stimuli were preexposed on a blocked schedule than when they were preexposed on an intermixed schedule. Experiment 2 again demonstrated the intermixed–blocked effect, but participants were preexposed to the stimuli in such a way that the temporal spacing between exposures to the unique features was equated between schedules. Memory for the unique and common features was similar to that found in Experiment 1. These findings support the proposal that perceptual learning depends on a mechanism that enhances memory for the unique features and reduces memory for common features.
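The intermixed versus blocked preexposure schedules described above have a simple structure that can be made explicit in code. This is an illustrative sketch of the trial orders only (function name hypothetical), not the authors' experimental software:

```python
def preexposure_schedule(n_each, mode):
    """Build a preexposure trial order for two stimuli, AW and BW,
    that share a common feature (W).

    'intermixed': alternating presentation (AW, BW, AW, BW, ...),
    the schedule that produced better subsequent discrimination.
    'blocked': all AW trials followed by all BW trials
    (AW, AW, ..., BW, BW, ...).
    """
    if mode == "intermixed":
        order = []
        for _ in range(n_each):
            order += ["AW", "BW"]
        return order
    if mode == "blocked":
        return ["AW"] * n_each + ["BW"] * n_each
    raise ValueError("mode must be 'intermixed' or 'blocked'")
```

Note that in the blocked schedule the gap between successive presentations of the same unique feature (A or B) is shorter than in the intermixed schedule; Experiment 2's contribution was to equate that spacing across schedules.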


Vision ◽  
2001 ◽  
Author(s):  
Nicoletta Berardi ◽ 
Adriana Fiorentini

SLEEP ◽  
2017 ◽  
Vol 40 (suppl_1) ◽  
pp. A85-A85 ◽  
Author(s):  
M Tamaki ◽  
T Watanabe ◽  
Y Sasaki
