Multiple components of statistical word learning are resource dependent: Evidence from a dual-task learning paradigm

Author(s):  
Tanja C Roembke ◽  
Bob McMurray

Abstract. It is increasingly understood that people may learn new word/object mappings in part via a form of statistical learning in which they track co-occurrences between words and objects across situations (cross-situational learning). Multiple learning processes contribute to this, thought to reflect the simultaneous influence of real-time hypothesis testing and gradual learning. It is unclear how these processes interact, and whether any of them require explicit cognitive resources. To manipulate the availability of working memory resources for explicit processing, participants completed a dual-task paradigm in which a cross-situational word-learning task was interleaved with a short-term memory task. We then used trial-by-trial analyses to estimate how different learning processes that play out simultaneously are impacted by resource availability. Critically, both the hypothesis-testing and the gradual-learning effects showed a small reduction under limited resources, and the effect of memory load was not fully mediated by these processes. This suggests that neither process is purely explicit and that additional resource-dependent processes may be at play. Consistent with a hybrid account, these findings suggest that these two aspects of learning may reflect different aspects of a single system gated by attention, rather than competing learning systems.
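The co-occurrence-tracking (gradual, associative) component of cross-situational learning described above can be made concrete with a minimal sketch. This is not the authors' model; the words, objects, and function names below are purely illustrative assumptions, and the real-time hypothesis-testing component is deliberately left out.

```python
from collections import defaultdict
import random

# counts[word][obj] accumulates how often each word has co-occurred with each object
counts = defaultdict(lambda: defaultdict(int))

def train(trials):
    """Each trial pairs the set of words heard with the set of objects seen."""
    for words, objects in trials:
        for w in words:
            for o in objects:
                counts[w][o] += 1   # gradual accumulation across ambiguous trials

def guess(word, candidates):
    """Pick the candidate object with the highest accumulated count (ties broken at random)."""
    best = max(counts[word][o] for o in candidates)
    return random.choice([o for o in candidates if counts[word][o] == best])

# Toy run: 'dax' co-occurs with obj1 on every trial, so obj1 wins at test.
train([({"dax", "blick"}, {"obj1", "obj2"}),
       ({"dax", "wug"},   {"obj1", "obj3"})])
print(guess("dax", ["obj1", "obj2", "obj3"]))   # -> obj1
```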

Author(s):  
Saima Noreen ◽  
Jan W. de Fockert

Abstract. We investigated the role of cognitive control in intentional forgetting by manipulating working memory load during the think/no-think task. In two experiments, participants learned a series of cue-target word pairs and were asked to recall the target words associated with some cues or to avoid thinking about the targets associated with other cues. In addition, participants performed a modified version of the n-back task, which required them to respond to the identity of a single target letter present in the currently presented cue word (n = 0 condition, low working memory load) and in either the previous cue word (n = 1 condition, high working memory load, Experiment 1) or the cue word presented two trials previously (n = 2 condition, high working memory load, Experiment 2). Participants' memory for the target words was subsequently tested using same and novel independent probes. In both experiments, participants were successful at forgetting on both the same-probe and independent-probe tests in the low working memory load condition, but were only successful at forgetting on the same-probe test in the high working memory load condition. We argue that the high-load working memory task diverted attention from direct suppression and acted as an interference-based strategy. Thus, when cognitive resources are limited, participants can switch between the strategies they use to prevent unwanted memories from coming to mind.
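The n-back load manipulation used here can be sketched schematically as follows. The cue words, probe letters, and function name are invented for the example and are not taken from the paper; the sketch only shows how the correct response is defined for each load level.

```python
# Schematic scoring of the n-back letter-detection load manipulation described above.

def nback_answers(cue_words, probe_letters, n):
    """For each trial, is the probe letter present in the cue word shown n trials back?
    Trials without an n-back referent yet are marked None."""
    answers = []
    for i, letter in enumerate(probe_letters):
        if i < n:
            answers.append(None)                     # no cue word n trials back yet
        else:
            answers.append(letter in cue_words[i - n])
    return answers

cues = ["garden", "pillow", "rocket"]
probes = ["g", "o", "r"]
print(nback_answers(cues, probes, n=0))   # [True, True, True]   (low load: current cue word)
print(nback_answers(cues, probes, n=2))   # [None, None, True]   (high load: two trials back)
```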


2019 ◽  
Author(s):  
Wim Pouw ◽  
Gertjan Rop ◽  
Bjorn de Koning ◽  
Fred Paas

The split-attention effect entails that learning from spatially separated but mutually referring information sources (e.g., text and picture) is less effective than learning from the equivalent spatially integrated sources. According to cognitive load theory, impaired learning is caused by the working memory load imposed by the need to distribute attention between the information sources and mentally integrate them. In this study, we directly tested whether the split-attention effect is caused by spatial separation per se. Spatial distance was varied in basic cognitive tasks involving pictures (Experiment 1) and text-picture combinations (Experiment 2; pre-registered study), and in more ecologically valid learning materials (Experiment 3). Experiment 1 showed that having to integrate two pictorial stimuli at greater distances diminished performance on a secondary visual working memory task, but did not lead to slower integration. When participants had to integrate a picture and written text in Experiment 2, a greater distance led to slower integration of the stimuli, but not to diminished performance on the secondary task. Experiment 3 showed that presenting spatially separated (compared to integrated) textual and pictorial information yielded fewer integrative eye movements, but this effect was not exacerbated by further increases in spatial distance. This effect on learning processes did not lead to differences in learning outcomes between conditions. In conclusion, we provide evidence that larger distances between spatially separated information sources influence learning processes, but that spatial separation on its own is not likely to be the only, nor a sufficient, condition for impacting learning outcomes.


2017 ◽  
Vol 39 (1) ◽  
pp. 1-35 ◽  
Author(s):  
MARTA MARECKA ◽  
JAKUB SZEWCZYK ◽  
ANNA JELEC ◽  
DONATA JANISZEWSKA ◽  
KAROLINA RATAJ ◽  
...  

Abstract. To acquire a new word, learners need to create its representation in phonological short-term memory (STM) and then encode it in their long-term memory. Two strategies can enable word representation in STM: universal segmentation and phonological mapping. Universal segmentation is language universal and should therefore predict word learning in any language, whereas phonological mapping is language specific. This study investigates the mechanisms of vocabulary learning by comparing the results of vocabulary learning tasks in multiple languages. We tested 44 Polish third graders learning English on phonological STM, on phonological awareness in Polish and in English, and on three tasks that involved learning novel word forms in Polish (first language), in English (second language), and in a language that did not resemble any language known to the participants (an unknown language). Participants' English proficiency was controlled through a vocabulary task. The results suggest that word learning engages different mechanisms for familiar and unfamiliar languages. Phonological awareness in English predicted the learning of second-language and unknown-language words, and phonological STM predicted the learning of unknown-language words. We propose that universal segmentation facilitates word learning only in an unfamiliar language, whereas in familiar languages speakers use phonological mapping to learn new words.


2015 ◽  
Vol 233 (10) ◽  
pp. 3023-3038 ◽  
Author(s):  
Miodrag Stokić ◽  
Dragan Milovanović ◽  
Miloš R. Ljubisavljević ◽  
Vanja Nenadović ◽  
Milena Čukić

2020 ◽  
Author(s):  
Tanja Roembke ◽  
Bob McMurray

Both explicit and implicit learning processes contribute to cross-situational word learning (e.g., Roembke & McMurray, 2016; Warren et al., 2019). However, it is unclear how these learning processes interact, and whether any specific aspect of cross-situational word learning is purely explicit. To investigate this, participants completed cross-situational word learning trials as well as a memory task that required remembering five (high-load) or only one (low-load) number in a between-subjects, dual-task paradigm. This allowed us to manipulate whether working memory resources were available for explicit processing. Further, we used trial-by-trial analyses to estimate how different learning effects thought to map onto either explicit or implicit learning processes were affected by condition. Word learning accuracy was lower in the high-load than in the low-load condition; this difference was likely driven by performance late in the experiment. Moreover, both the more explicit and the more implicit effects were reduced when working memory resources were limited, suggesting that neither is purely the result of, nor fully independent of, explicit learning processes. Consistent with a hybrid account, these findings indicate that explicit and implicit learning processes do not compete, but rather support each other, during cross-situational word learning.


2001 ◽  
Vol 24 (1) ◽  
pp. 143-144 ◽  
Author(s):  
Bart Rypma ◽  
John D.E. Gabrieli

Cowan argues that the true short-term memory (STM) capacity limit is about 4 items. Functional neuroimaging data converge with this conclusion, indicating distinct neural activity patterns depending on whether or not memory task demands exceed this limit. STM for verbal information within that capacity invokes focal prefrontal cortical activation that increases with memory load. STM for verbal information exceeding that capacity invokes widespread prefrontal activation in regions associated with executive and attentional processes that may mediate chunking to accommodate STM capacity limits.


Author(s):  
Patrick Bonin ◽  
Margaux Gelin ◽  
Betty Laroche ◽  
Alain Méot ◽  
Aurélia Bugaiska

Abstract. Animates are better remembered than inanimates. According to the adaptive view of human memory (Nairne, 2010; Nairne & Pandeirada, 2010a, 2010b), this observation results from the fact that animates are more important for survival than inanimates. This ultimate explanation of animacy effects has to be complemented by proximate explanations. Moreover, animacy currently represents an uncontrolled word characteristic in most cognitive research (VanArsdall, Nairne, Pandeirada, & Cogdill, 2015). In four studies, we therefore investigated the “how” of animacy effects. Study 1 revealed that words denoting animates were recalled better than those referring to inanimates in an intentional memory task. Study 2 revealed that adding a concurrent memory load when processing words for the animacy dimension did not impede the animacy effect on recall rates. Study 3A was an exact replication of Study 2, and Study 3B used a higher concurrent memory load. In these two follow-up studies, animacy effects on recall performance were again not altered by a concurrent memory load. Finally, Study 4 showed that using interactive imagery to encode animate and inanimate words did not alter the recall rate of animate words but did increase the recall of inanimate words. Taken together, the findings suggest that imagery processes contribute to these effects.


2020 ◽  
Author(s):  
Nicholas Harp ◽  
Michael D. Dodd ◽  
Maital Neta

Cognitive resources are needed for successful executive functioning; when resources are limited due to competing demands, task performance is impaired. Although some tasks are accomplished with relatively few resources (e.g., judging trustworthiness and emotion in others), others are more complex. Specifically, in the face of emotional ambiguity (i.e., stimuli that do not convey a clear positive or negative meaning, such as a surprised facial expression), our decisions to approach or avoid appear to rely on the availability of top-down regulatory resources to overcome an initial negativity bias. Cognition-emotion interaction theories (e.g., dual competition) posit that emotion and executive processing rely on shared resources, suggesting that competing demands would hamper these regulatory responses towards emotional ambiguity. Here, we employed a 2x2 design to investigate the effects of load (low versus high) and domain (non-emotional vs. emotional) on evaluations of surprised faces. As predicted, there were domain-specific effects, such that categorizations of surprise were more negative for emotional than non-emotional loads. Consistent with prior work, low load (regardless of domain; i.e., domain-general) was associated with greater response competition on trials resulting in a positive categorization, showing that positive categorizations are characterized by an initial negativity. This effect was diminished under high load. These results lend insight into the resources supporting a positive valence bias by demonstrating that emotion-specific regulatory resources are important for overriding the initial negativity in response to emotional ambiguity. However, both domain-general and domain-specific loads impact the underlying processes.

