Bridging the gap between speech segmentation and word-to-world mappings: Evidence from an audiovisual statistical learning task

2010 ◽  
Vol 63 (3) ◽  
pp. 295-305 ◽  
Author(s):  
Toni Cunillera ◽  
Matti Laine ◽  
Estela Càmara ◽  
Antoni Rodríguez-Fornells

Author(s):  
Ana Franco ◽  
Julia Eberlen ◽  
Arnaud Destrebecqz ◽  
Axel Cleeremans ◽  
Julie Bertels

Abstract. The Rapid Serial Visual Presentation (RSVP) procedure is a method widely used in visual perception research. In this paper we propose an adaptation of this method that can be used with auditory material and enables assessment of statistical learning in speech segmentation. Adult participants were exposed to an artificial speech stream composed of statistically defined trisyllabic nonsense words. They were subsequently instructed to detect a target syllable within a short Rapid Serial Auditory Presentation (RSAP) speech stream. Results showed that reaction times varied as a function of the statistical predictability of the syllable: second and third syllables of each word were responded to faster than first syllables. This result suggests that the RSAP procedure provides a reliable and sensitive indirect measure of auditory statistical learning.
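The statistical structure underlying such artificial speech streams can be illustrated with a short sketch. The nonsense words below are hypothetical examples, not the authors' actual stimuli; the point is that transitional probabilities between syllables are high within a word and low at word boundaries, which is why word-medial and word-final syllables are more predictable than word-initial ones.

```python
import random
from collections import Counter

# Hypothetical trisyllabic nonsense words (illustrative, not the study's stimuli)
words = [("tu", "pi", "ro"), ("go", "la", "bu"),
         ("bi", "da", "ku"), ("pa", "do", "ti")]

# Concatenate randomly chosen words into a continuous stream, as in an exposure phase
random.seed(0)
stream = [syl for _ in range(200) for syl in random.choice(words)]

# Forward transitional probability: P(next syllable | current syllable)
pair_counts = Counter(zip(stream, stream[1:]))
syl_counts = Counter(stream[:-1])
tp = {(a, b): c / syl_counts[a] for (a, b), c in pair_counts.items()}

# Within-word transitions are fully predictable (TP = 1.0) ...
print(tp[("tu", "pi")], tp[("pi", "ro")])
# ... while transitions across word boundaries are not (TP ≈ 1/4 here),
# which is the cue that supports segmentation and faster responses to
# later syllables of a word.
print(max(v for (a, b), v in tp.items() if a == "ro"))
```

With four equiprobable words, a word-final syllable is followed by any of the four word-initial syllables, so its forward transitional probability hovers around 0.25, while within-word transitions are deterministic.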


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Chiara Santolin ◽  
Orsola Rosa-Salva ◽  
Bastien S. Lemaire ◽  
Lucia Regolin ◽  
Giorgio Vallortigara

Abstract. Statistical learning is a key mechanism for detecting regularities in a variety of sensory inputs. Precocial newborn domestic chicks provide an excellent model for (1) exploring unsupervised forms of statistical learning from a comparative perspective, and (2) elucidating the ecological function of statistical learning using imprinting procedures. Here we investigated the role of chicks’ sex in modulating the direction of preference (for familiarity or novelty) in a visual statistical learning task previously employed with chicks and human infants. Using both automated tracking and direct human coding, we confirmed chicks’ capacity to recognize the presence of a statistically defined structure underlying a continuous stream of shapes. Using a different chicken strain than previous studies, we were also able to highlight sex differences in chicks’ propensity to approach the familiar or the novel sequence. This could also explain a previous failure to reveal statistical learning in chicks whose sex was not determined. Our study confirms chicks’ ability to track visual statistics. The pivotal role of sex in determining familiarity or novelty preferences in this species, and its interaction with the animals’ strain, highlight the importance of contextualizing comparative research within the ecology of each species.


2015 ◽  
Vol 8 (2) ◽  
pp. 277-282 ◽  
Author(s):  
Karolina Janacsek ◽  
Geza Gergely Ambrus ◽  
Walter Paulus ◽  
Andrea Antal ◽  
Dezso Nemeth

2021 ◽  
Author(s):  
Kelly Garner ◽  
Jordan Butler ◽  
Scott Jones ◽  
Paul Edmund Dux

Performing two tasks concurrently typically leads to performance costs. Historically, multitasking costs have been assumed to reflect fundamental constraints of cognitive architectures. A new perspective proposes that multitasking costs instead reflect information sharing between the constituent tasks: shared information gains representational efficiency at the expense of multitasking capability. We tested this theory by determining whether increasing cross-task information harms multitasking. Forty-eight participants performed multitasks in which they mapped keypresses to four shapes. In a subsequent statistical learning task, these shapes formed pairs that were either predictive or non-predictive of an upcoming target judgement. When participants again responded to these shapes in the multitasking context, performance was poorer for shape pairs that had been predictive of target outcomes in the learning phase than for non-predictive pairs. Thus, associating common information with shape pairings transferred to impair multitasking performance, providing the first causal evidence for the shared representational account of multitasking performance.


2018 ◽  
Author(s):  
Hyungwook Yim ◽  
Simon Dennis ◽  
Vladimir Sloutsky

Models of statistical learning do not place constraints on the complexity of the memory structure that is formed during statistical learning, while empirical studies using the statistical learning task have only examined the formation of simple memory structures (e.g., two-way binding). On the other hand, the memory literature, using explicit memory tasks, has shown that people are able to form memory structures of different complexities and that more complex memory structures (e.g., three-way binding) are usually more difficult to form. We examined whether complex memory structures such as three-way bindings can be implicitly formed through statistical learning by utilizing manipulations that have been used in the paired-associate learning paradigm (e.g., AB/ABr condition). Through three experiments, we show that while simple two-way binding structures can be formed implicitly, three-way bindings can only be formed with explicit instructions. The results indicate that explicit attention may be a necessary component in forming three-way memory structures and suggest that existing models should place constraints on the representational structures that can be formed.


2018 ◽  
Author(s):  
Amy Perfors ◽  
Evan Kidd

Humans have the ability to learn surprisingly complicated statistical information in a variety of modalities and situations, often based on relatively little input. These statistical learning (SL) skills appear to underlie many kinds of learning, but despite their ubiquity, we still do not fully understand precisely what SL is and what individual differences on SL tasks reflect. Here we present experimental work suggesting that at least some individual differences arise from variation in perceptual fluency — the ability to rapidly or efficiently code and remember the stimuli that statistical learning occurs over. We show that performance on a standard SL task varies substantially within the same (visual) modality as a function of whether the stimuli involved are familiar or not, independent of stimulus complexity. Moreover, we find that test-retest correlations of performance in a statistical learning task using stimuli of the same level of familiarity (but distinct items) are stronger than correlations across the same task with different levels of familiarity. Finally, we demonstrate that statistical learning performance is predicted by an independent measure of stimulus-specific perceptual fluency which contains no statistical learning component at all. Our results suggest that a key component of SL performance may be unrelated to either domain-specific statistical learning skills or modality-specific perceptual processing.


2021 ◽  
Author(s):  
Julie M. Schneider ◽  
Yi-Lun Weng ◽  
Anqi Hu ◽  
Zhenghan Qi

Statistical learning, the process of tracking distributional information and discovering embedded patterns, is traditionally regarded as a form of implicit learning. However, recent studies proposed that both implicit (attention-independent) and explicit (attention-dependent) learning systems are involved in statistical learning. To understand the role of attention in statistical learning, the current study investigates the cortical processing of prediction errors in speech based on either local or global distributional information. We then ask how these cortical responses relate to statistical learning behavior in a word segmentation task. We found ERP evidence of pre-attentive processing of both the local (mismatch negativity) and global distributional information (late discriminative negativity). However, as speech elements became less frequent and more surprising, some participants showed an involuntary attentional shift, reflected in a P3a response. Individuals who displayed attentive neural tracking of distributional information showed faster learning in a speech statistical learning task. These results provide important neural evidence elucidating the facilitatory role of attention in statistical learning.


2018 ◽  
Author(s):  
Vincent Valton ◽  
Povilas Karvelis ◽  
Katie L. Richards ◽  
Aaron R. Seitz ◽  
Stephen M. Lawrie ◽  
...  

Abstract. Prominent theories suggest that symptoms of schizophrenia stem from learning deficiencies that result in distorted internal models of the world. To further test these theories, we here use a visual statistical learning task known to induce rapid implicit learning of the stimulus statistics (Chalk et al., 2010). In this task, participants are presented with a field of coherently moving dots and must report the presented direction of the dots (estimation task) and whether they saw any dots at all (detection task). Two of the directions were presented more frequently than the others. In controls, the implicit acquisition of the stimulus statistics influences perception in two ways: (1) motion directions are perceived as more similar to the most frequently presented directions than they really are (estimation biases); (2) in the absence of stimuli, participants sometimes report perceiving the most frequently presented directions (a form of hallucination). Such behaviour is consistent with probabilistic inference, i.e. combining learnt perceptual priors with sensory evidence. We investigated whether patients with chronic, stable, treated schizophrenia (n=20) differ from controls (n=23) in the acquisition of the perceptual priors and/or their influence on perception. We found that, although patients were slower than controls, they showed comparable acquisition of perceptual priors, correctly approximating the stimulus statistics. This suggests that patients have no statistical learning deficits in our task, which may reflect our patients’ relative wellbeing on antipsychotic medication. Intriguingly, however, patients reported significantly fewer hallucinations of the most frequently presented directions than controls, and made fewer prior-based lapse estimations. This suggests that prior expectations had less influence on patients’ perception than on controls’ when stimuli were absent or below the perceptual threshold.
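The probabilistic-inference account described above (learnt prior combined with sensory evidence) can be sketched as follows. The candidate directions, noise level, and prior weights here are illustrative assumptions, not the study's parameters; the sketch only shows how a learnt prior over two frequent directions biases the estimate of a noisy stimulus toward them.

```python
import numpy as np

# Illustrative setup (not the study's actual parameters):
# candidate motion directions on a circle, two of them over-represented
directions = np.arange(0, 360, 45)           # degrees
prior = np.full(len(directions), 0.05)
prior[[1, 5]] = 0.3                          # learnt prior: 45° and 225° most frequent
prior /= prior.sum()

def circ_dist(a, b):
    """Smallest angular distance between directions, in degrees."""
    d = np.abs(a - b) % 360
    return np.minimum(d, 360 - d)

def posterior(stimulus_deg, sigma=40.0):
    """Combine the learnt prior with noisy sensory evidence (Gaussian-shaped likelihood)."""
    like = np.exp(-0.5 * (circ_dist(directions, stimulus_deg) / sigma) ** 2)
    post = prior * like
    return post / post.sum()

# A stimulus at 70° is "pulled" toward the nearby frequent direction (45°):
print(directions[np.argmax(posterior(70))])  # estimation bias toward the prior
```

When the sensory evidence is absent or very weak, the likelihood is flat and the posterior collapses onto the prior, which corresponds to the prior-driven "hallucinations" of the frequent directions described in the abstract.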


Author(s):  
Christopher W. Robinson ◽  
Vladimir M. Sloutsky

Presenting information to multiple sensory modalities sometimes facilitates and sometimes interferes with processing of this information. Research examining interference effects shows that auditory input often interferes with processing of visual input in young children (i.e., auditory dominance effect), whereas visual input often interferes with auditory processing in adults (i.e., visual dominance effect). The current study used a cross-modal statistical learning task to examine modality dominance in adults. Participants ably learned auditory and visual statistics when auditory and visual sequences were presented unimodally and when auditory and visual sequences were correlated during training. However, increasing task demands resulted in an important asymmetry: Increased task demands attenuated visual statistical learning, while having no effect on auditory statistical learning. These findings are consistent with auditory dominance effects reported in young children and have important implications for our understanding of how sensory modalities interact while learning the structure of cross-modal information.

