Integrating when and what information in the left parietal lobe allows language rule generalization

PLoS Biology ◽  
2020 ◽  
Vol 18 (11) ◽  
pp. e3000895
Author(s):  
Joan Orpella ◽  
Pablo Ripollés ◽  
Manuela Ruzzoli ◽  
Julià L. Amengual ◽  
Alicia Callejas ◽  
...  

A crucial aspect of learning a language is discovering the rules that govern how words are combined to convey meaning. Because rules are characterized by sequential co-occurrences between elements (e.g., “These cupcakes are unbelievable”), tracking the statistical relationships between these elements is fundamental. However, purely bottom-up statistical learning cannot fully account for the ability to create abstract rule representations that can be generalized, a paramount requirement of linguistic rules. Here, we provide evidence that, after the statistical relations between words have been extracted, the engagement of goal-directed attention is key to enabling rule generalization. Incidental learning performance during a rule-learning task on an artificial language revealed a progressive shift from statistical learning to goal-directed attention. In addition, and consistent with the recruitment of attention, functional MRI (fMRI) analyses of late learning stages showed left parietal activity within a broad bilateral dorsal frontoparietal network. Critically, repetitive transcranial magnetic stimulation (rTMS) over participants’ peak of activation within the left parietal cortex impaired their ability to generalize learned rules to a structurally analogous new language. Neither the absence of stimulation nor rTMS over a nonrelevant brain region had the same interfering effect on generalization. Performance on an additional attentional task showed that rTMS on this parietal site hindered participants’ ability to integrate “what” (stimulus identity) and “when” (stimulus timing) information about an expected target. The present findings suggest that learning rules from speech is a two-stage process: following statistical learning, goal-directed attention, involving left parietal regions, integrates “what” and “when” stimulus information to facilitate rapid rule generalization.
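
As a rough illustration of the kind of sequential statistics such a learner must track (a minimal sketch, not the authors' stimuli or analysis; the toy words and stream below are invented), the following code estimates forward transition probabilities between adjacent words in a continuous stream:

```python
from collections import Counter, defaultdict

def transition_probabilities(stream):
    """Estimate P(next word | current word) from a flat word stream."""
    pair_counts = defaultdict(Counter)
    for current, nxt in zip(stream, stream[1:]):
        pair_counts[current][nxt] += 1
    return {
        current: {nxt: n / sum(counter.values()) for nxt, n in counter.items()}
        for current, counter in pair_counts.items()
    }

# Invented artificial-language stream with frame-like pairs
# ("tep ... jik", "rud ... nol") around variable middle words.
stream = ["tep", "wadim", "jik", "rud", "foba", "nol",
          "tep", "kicey", "jik", "rud", "wadim", "nol"]

probs = transition_probabilities(stream)
print(probs["tep"])    # middle words following "tep" are equiprobable
print(probs["wadim"])  # "wadim" is followed by "jik" or "nol" depending on the frame
```

Such first-order counts capture the co-occurrence statistics of one specific vocabulary; generalizing the underlying rule to a structurally analogous language with new words is precisely what the counts alone cannot provide.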


2018 ◽  
Author(s):  
Amy Perfors ◽  
Evan Kidd

Humans have the ability to learn surprisingly complicated statistical information in a variety of modalities and situations, often based on relatively little input. These statistical learning (SL) skills appear to underlie many kinds of learning, but despite their ubiquity, we still do not fully understand precisely what SL is and what individual differences on SL tasks reflect. Here we present experimental work suggesting that at least some individual differences arise from variation in perceptual fluency — the ability to rapidly or efficiently code and remember the stimuli that statistical learning occurs over. We show that performance on a standard SL task varies substantially within the same (visual) modality as a function of whether the stimuli involved are familiar or not, independent of stimulus complexity. Moreover, we find that test-retest correlations of performance in a statistical learning task using stimuli of the same level of familiarity (but distinct items) are stronger than correlations across the same task with different levels of familiarity. Finally, we demonstrate that statistical learning performance is predicted by an independent measure of stimulus-specific perceptual fluency which contains no statistical learning component at all. Our results suggest that a key component of SL performance may be unrelated to either domain-specific statistical learning skills or modality-specific perceptual processing.
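
To make the test-retest comparison concrete, here is a minimal simulation sketch (with made-up scores, not the authors' data) in which two same-familiarity administrations share more trait variance than a cross-familiarity pair, so the same-familiarity correlation comes out higher:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60  # hypothetical number of participants

# Simulate a latent perceptual-fluency trait plus task noise (illustration only).
fluency = rng.normal(0, 1, n)
familiar_t1 = fluency + rng.normal(0, 0.5, n)        # SL score, familiar stimuli, session 1
familiar_t2 = fluency + rng.normal(0, 0.5, n)        # SL score, familiar stimuli (distinct items), session 2
unfamiliar  = 0.4 * fluency + rng.normal(0, 1.0, n)  # SL score, unfamiliar stimuli

same_familiarity_r = np.corrcoef(familiar_t1, familiar_t2)[0, 1]
cross_familiarity_r = np.corrcoef(familiar_t1, unfamiliar)[0, 1]
print(f"same familiarity r = {same_familiarity_r:.2f}, "
      f"cross familiarity r = {cross_familiarity_r:.2f}")
```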


2021 ◽  
Vol 26 (3) ◽  
pp. 558-567
Author(s):  
Dongsun Yim ◽  
Yoonhee Yang

Objectives: If statistical learning ability is critical for language acquisition and development, it is necessary to confirm whether enhancing statistical learning ability can improve children's language skills. The present study investigated whether children with and without vocabulary delay (VD) differ in their performance on statistical learning (SL) tasks manipulated across implicit, implicit*2, and explicit exposure conditions and across visual and auditory domains, and also explored the relationship among SL, vocabulary, and quick incidental learning (QUIL). Methods: A total of 132 children aged 3 to 8 years participated in this study, including children with vocabulary delay (N = 34) and typically developing (TD) children (N = 98). Participants completed SL tasks composed of the three exposure conditions and QUIL tasks tapping novel word learning ability. Results: The VD group scored significantly lower than the TD group in the explicit condition of the auditory statistical learning task, and there was a significant correlation between QUIL and auditory SL (implicit*2) only in the TD group. Conclusion: These results suggest that the TD group was ready to use explicit cues for learning as a domain-specific (auditory) benefit, and that their auditory SL ability is closely linked to vocabulary abilities. The current study suggests one possibility: that the VD group can increase their statistical learning ability through double auditory exposures. Novel quick incidental learning in the TD group was supported by statistical learning, but this was not seen in the VD group.


2020 ◽  
Author(s):  
Andrew Perfors ◽  
Evan Kidd

Humans have the ability to learn surprisingly complicated statistical information in a variety of modalities and situations, often based on relatively little input. These statistical learning (SL) skills appear to underlie many kinds of learning, but despite their ubiquity, we still do not fully understand precisely what SL is and what individual differences on SL tasks reflect. Here we present experimental work suggesting that at least some individual differences arise from variation in perceptual fluency (the ability to rapidly or efficiently code and remember the stimuli that statistical learning occurs over) and that perceptual fluency is driven at least in part by stimulus familiarity: performance on a standard SL task varies substantially within the same (visual) modality as a function of whether the stimuli involved are familiar or not, independent of stimulus complexity. Moreover, we find that test-retest correlations of performance in a statistical learning task using stimuli of the same level of familiarity (but distinct items) are stronger than correlations across the same task with stimuli of different levels of familiarity. Finally, we demonstrate that statistical learning performance is predicted by an independent measure of stimulus-specific perceptual fluency that contains no statistical learning component at all. Our results suggest that a key component of statistical learning performance may be related to stimulus-specific perceptual processing and familiarity.


2020 ◽  
Author(s):  
T. Bryan Jackson ◽  
Ted Maldonado ◽  
Sydney M. Eakin ◽  
Joseph M. Orr ◽  
Jessica A. Bernard

To date, most aging research has focused on cortical systems and networks, ignoring the cerebellum, which has been implicated in both cognitive and motor function. Critically, older adults (OA) show marked differences in cerebellar volume and functional networks, suggesting that the cerebellum may play a key role in the behavioral differences observed in advanced age. OA may be less able to recruit cerebellar resources due to network and structural differences. Here, 26 young adults (YA) and 25 OA performed a second-order learning task, known to activate the cerebellum, in the fMRI environment. Behavioral results indicated that YA performed significantly better and learned more quickly compared to OA. Functional imaging revealed robust parietal and cerebellar activity during learning (compared to control) blocks within each group. OA showed increased activity (relative to YA) in the left inferior parietal lobe in response to instruction cues during learning (compared to control), whereas YA showed increased activity (relative to OA) in the left anterior cingulate in response to feedback cues during learning, potentially explaining age-related performance differences. Visual interpretation of effect size maps showed more bilateral posterior cerebellar activation in OA compared to YA during learning blocks, whereas early learning showed widespread cerebellar activation in YA compared to OA. There were qualitatively large age-related differences in cerebellar recruitment in terms of effect sizes, yet no statistically significant differences. These findings serve to further elucidate age-related differences and similarities in cerebellar and cortical brain function and implicate the cerebellum and its networks as regions of interest in aging research.


F1000Research ◽  
2013 ◽  
Vol 2 ◽  
pp. 180 ◽  
Author(s):  
Matthew Sykes ◽  
Kalina Makowiecki ◽  
Jennifer Rodger

Repetitive transcranial magnetic stimulation (rTMS) is thought to facilitate brain plasticity. However, few studies address anatomical changes following rTMS in relation to behaviour. We delivered 5 weeks of daily pulsed rTMS to adult ephrin-A2-/- and wildtype (C57BI/6j) mice (n=10 per genotype) undergoing a visual learning task and analysed learning performance, as well as spine density, in the dentate gyrus molecular and CA1 pyramidal cell layers in Golgi-stained brain sections. We found that neither learning behaviour nor hippocampal spine density was affected by long-term rTMS. Our negative results highlight the lack of deleterious side effects in normal subjects and are consistent with previous studies suggesting that rTMS has a bigger effect on abnormal or injured brain substrates than on normal/control structures.


2019 ◽  
Author(s):  
Noam Siegelman ◽  
Louisa Bogaerts ◽  
Amit Elazar ◽  
Joanne Arciuli ◽  
Ram Frost

Statistical learning (SL) is typically considered to be a domain-general mechanism by which cognitive systems discover the underlying statistical regularities in the input. Recent findings, however, show clear differences in processing regularities across modalities and stimuli as well as low correlations between performance on visual and auditory tasks. Why does a presumably domain-general mechanism show distinct patterns of modality and stimulus specificity? Here we claim that the key to this puzzle lies in the prior knowledge that learners bring to the learning task. Specifically, we argue that learners' already entrenched expectations about speech co-occurrences from their native language impact what they learn from novel auditory verbal input. In contrast, learners are free of such entrenchment when processing sequences of visual material such as abstract shapes. We present evidence from three experiments supporting this hypothesis by showing that auditory-verbal tasks display distinct item-specific effects, resulting in low correlations between test items. In contrast, non-verbal tasks, whether visual or auditory, show high correlations between items. Importantly, we also show that individual performance in visual and auditory SL tasks that do not implicate prior knowledge regarding the co-occurrence of elements is highly correlated. In a fourth experiment, we present further support for the entrenchment hypothesis by showing that the variance in performance between different stimuli in auditory-verbal statistical learning tasks can be traced back to their resemblance to participants' native language. We discuss the methodological and theoretical implications of these findings, focusing on models of domain generality/specificity of SL.
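
The logic of that fourth experiment can be illustrated with a simple item-level correlation; the numbers below are invented, and "resemblance" stands in for whatever wordlikeness or phonotactic-probability score one might compute against the native language:

```python
import numpy as np

# Hypothetical per-item data for an auditory-verbal SL test (invented values):
# resemblance = how similar each test item is to native-language co-occurrence statistics,
# item_accuracy = mean proportion of participants endorsing that item at test.
resemblance   = np.array([0.82, 0.15, 0.64, 0.31, 0.90, 0.22, 0.55, 0.40])
item_accuracy = np.array([0.78, 0.51, 0.70, 0.55, 0.81, 0.49, 0.66, 0.60])

r = np.corrcoef(resemblance, item_accuracy)[0, 1]
print(f"item-level correlation between L1 resemblance and accuracy: r = {r:.2f}")
```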


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0244954
Author(s):  
Leyla Eghbalzad ◽  
Joanne A. Deocampo ◽  
Christopher M. Conway

Language is acquired in part through statistical learning abilities that encode environmental regularities. Language development is also heavily influenced by social environmental factors such as socioeconomic status (SES). However, it is unknown to what extent statistical learning interacts with SES to affect language outcomes. We measured event-related potentials in 26 children aged 8–12 while they performed a visual statistical learning task. Regression analyses indicated that children’s learning performance moderated the relationship between SES and both syntactic and vocabulary language comprehension scores. For children demonstrating high learning, SES had a weaker effect on language compared to children showing low learning. These results suggest that high statistical learning ability can provide a buffer against the disadvantages associated with being raised in a lower-SES household.
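
The moderation result described above corresponds to a regression that includes an SES-by-learning interaction term; below is a minimal sketch with simulated data (not the study's ERP or language measures), where a negative interaction coefficient reproduces the reported pattern of SES mattering less when statistical learning is high:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 26  # same sample size as reported above; the data themselves are simulated

ses = rng.normal(0, 1, n)  # standardized socioeconomic status (hypothetical scores)
sl = rng.normal(0, 1, n)   # standardized statistical-learning performance (hypothetical)

# Simulate the reported pattern: SES matters less when SL is high (negative interaction).
language = 0.5 * ses + 0.3 * sl - 0.4 * ses * sl + rng.normal(0, 0.5, n)

# Moderation model: language ~ intercept + SES + SL + SES:SL
X = np.column_stack([np.ones(n), ses, sl, ses * sl])
beta, *_ = np.linalg.lstsq(X, language, rcond=None)
print(dict(zip(["intercept", "SES", "SL", "SES_x_SL"], np.round(beta, 2))))
```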

