Visual Statistical Learning
Recently Published Documents

TOTAL DOCUMENTS: 134 (five years: 38)
H-INDEX: 16 (five years: 3)

2022, pp. 174702182210746
Author(s): Jolene Alexa Cox, Timothy Walter Cox, Anne Marie Aimola Davies

Our visual system is built to extract regularities in how objects within our visual environment appear in relation to each other across time and space (‘visual statistical learning’). Existing research indicates that visual statistical learning is modulated by selective attention. Our attentional system prioritises information that enables behaviour; for example, animates are prioritised over inanimates (the ‘animacy advantage’). The present study examined the effects of selective attention and animacy on visual statistical learning in young adults (N = 284). We tested visual statistical learning of attended and unattended information across four animacy conditions: (i) living things that can self-initiate movement (animals); (ii) living things that cannot self-initiate movement (fruits and vegetables); (iii) non-living things that can generate movement (vehicles); and (iv) non-living things that cannot generate movement (tools and kitchen utensils). We implemented a four-point confidence-rating scale as an assessment of participants’ awareness of the regularities in the visual statistical learning task. There were four key findings. First, selective attention plays a critical role in modulating visual statistical learning. Second, animacy does not play a special role in visual statistical learning. Third, visual statistical learning of attended information cannot be exclusively accounted for by unconscious knowledge. Fourth, performance on the visual statistical learning task is associated with the proportion of stimuli that were named or labelled. Our findings support the notion that visual statistical learning is a powerful mechanism by which our visual system resolves an abundance of sensory input over time.


2021
Author(s): Francisco Vicente-Conesa, Tamara Giménez-Fernández, David Luque, Miguel A. Vadillo

The additional singleton task has become a popular paradigm to explore visual statistical learning and selective attention. In this task, participants are instructed to find a different-shaped target among a series of distractors as fast as possible. In some trials, the search display includes a singleton distractor with a different colour, making search harder. This singleton distractor appears more often in one location than in all the remaining locations. The typical results of these experiments show that participants learn to ignore the area of the screen that is more likely to contain the singleton distractor. It is often claimed that this learning takes place unconsciously, because at the end of the experiment participants seem to be unable to identify the location where the singleton distractor appeared most frequently during the task. In the present study, we tested participants’ awareness in three high-powered experiments using alternative measures. Contrary to previous studies, the results show clear evidence of explicit knowledge about which area of the display was more likely to contain the singleton distractor, suggesting that this type of learning might not be unconscious.
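To make the structure of the paradigm concrete, the sketch below simulates how singleton-distractor locations might be assigned across trials, with one location receiving the singleton more often than the rest. The number of display locations and the probability split are illustrative assumptions, not parameters reported in the study.

import random

# Minimal sketch of a biased singleton-distractor location schedule.
# The 8 locations and the 65%/35% split are illustrative assumptions only.
N_LOCATIONS = 8
HIGH_PROB_LOCATION = 3          # location that contains the singleton most often
P_HIGH = 0.65                   # probability the singleton appears there

def sample_singleton_location(rng: random.Random) -> int:
    """Return the singleton distractor's location for one trial."""
    if rng.random() < P_HIGH:
        return HIGH_PROB_LOCATION
    # Otherwise pick uniformly among the remaining locations.
    others = [loc for loc in range(N_LOCATIONS) if loc != HIGH_PROB_LOCATION]
    return rng.choice(others)

if __name__ == "__main__":
    rng = random.Random(0)
    trials = [sample_singleton_location(rng) for _ in range(1000)]
    print("share at high-probability location:",
          trials.count(HIGH_PROB_LOCATION) / len(trials))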


Infancy, 2021
Author(s): Julie Bertels, Estibaliz San Anton, Emeline Boursain, Hermann Bulf, Arnaud Destrebecqz

Author(s): Su Hyoun Park, Leeland L. Rogers, Matthew R. Johnson, Timothy J. Vickery

PLoS ONE, 2020, Vol 15 (12), pp. e0243100
Author(s): Katie L. Richards, Povilas Karvelis, Stephen M. Lawrie, Peggy Seriès

Background: Deficits in visual statistical learning and predictive processing could in principle explain the key characteristics of inattention and distractibility in attention deficit hyperactivity disorder (ADHD). Specifically, from a Bayesian perspective, ADHD may be associated with flatter likelihoods (increased sensory processing noise), and/or difficulties in generating or using predictions. To our knowledge, such hypotheses have never been directly tested.

Methods: Here, we test these hypotheses by evaluating whether adults diagnosed with ADHD (n = 17) differed from a control group (n = 30) in implicitly learning and using low-level perceptual priors to guide sensory processing. We used a visual statistical learning task in which participants had to estimate the direction of a cloud of coherently moving dots. Unbeknownst to the participants, two of the directions were presented more frequently than the others, creating an implicit bias (prior) towards those directions. This task had previously revealed differences in other neurodevelopmental disorders, such as autistic spectrum disorder and schizophrenia.

Results: We found that both groups acquired the prior expectation for the most frequent directions and that these expectations substantially influenced task performance. Overall, there were no group differences in how much the priors influenced performance. However, subtle group differences were found in the influence of the prior over time.

Conclusion: Our findings suggest that the symptoms of inattention and hyperactivity in ADHD do not stem from broad difficulties in developing and/or using low-level perceptual priors.
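As an illustration of the Bayesian account described above, the following is a minimal, hypothetical sketch of how a learned prior over motion directions can bias direction estimates, and how a flatter (noisier) sensory likelihood lets that prior exert more pull. The two frequent directions and the concentration parameters are placeholder values, not taken from the study.

import numpy as np

# Hedged sketch: the direction estimate is the posterior combination of a noisy
# sensory likelihood and a prior peaked at the two over-represented directions.
directions = np.deg2rad(np.arange(0, 360))          # hypothesis grid (radians)

def von_mises(x, mu, kappa):
    """Unnormalised von Mises density over circular directions."""
    return np.exp(kappa * np.cos(x - mu))

# Prior learned from exposure: two frequent directions, here 90 and 270 degrees.
prior = von_mises(directions, np.deg2rad(90), 4) + von_mises(directions, np.deg2rad(270), 4)
prior /= prior.sum()

def posterior_mean_deg(true_dir_deg, kappa_sensory):
    """Posterior circular mean for one stimulus; lower kappa = flatter likelihood."""
    likelihood = von_mises(directions, np.deg2rad(true_dir_deg), kappa_sensory)
    post = likelihood * prior
    post /= post.sum()
    mean = np.arctan2((post * np.sin(directions)).sum(),
                      (post * np.cos(directions)).sum())
    return np.rad2deg(mean) % 360

# A stimulus at 75 degrees is pulled toward the nearby frequent direction (90),
# and more strongly so when the likelihood is flatter (more sensory noise).
print(posterior_mean_deg(75, kappa_sensory=8))   # sharp likelihood: small bias
print(posterior_mean_deg(75, kappa_sensory=2))   # flat likelihood: larger bias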

