Reversible second-order conditional sequences in incidental sequence learning tasks

2018 · Vol. 72 (5) · pp. 1164–1175
Author(s): Antoine Pasquali, Axel Cleeremans, Vinciane Gaillard

In sequence learning tasks, participants’ sensitivity to the sequential structure of a series of events often exceeds their ability to express the relevant knowledge intentionally, for instance in generation tasks that require participants to produce either the next element of a sequence (inclusion) or a different element (exclusion). Comparing generation performance under inclusion and exclusion conditions makes it possible to assess the respective influences of conscious and unconscious learning. Recently, two main concerns have been raised about such tasks. First, it is often difficult to design control sequences that allow clear comparisons with the training material. Second, it is challenging to get participants to perform appropriately under exclusion instructions, because the requirement to exclude familiar responses often leads them to adopt degenerate strategies (e.g., pressing the same key repeatedly), which then need to be specifically singled out as invalid. To overcome both concerns, we introduce reversible second-order conditional (RSOC) sequences and show (a) that they elicit particularly strong transfer effects, (b) that dissociating implicit and explicit influences becomes possible thanks to the removal of salient transitions in RSOCs, and (c) that exclusion instructions can be greatly simplified without losing sensitivity.
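For readers unfamiliar with the material, the sketch below illustrates the second-order conditional property: each location is fully determined by the pair of locations preceding it, so no single-item (first-order) transition is predictive on its own. The example sequence is hypothetical, and treating the reversed training loop as transfer material is our assumption about how RSOC control sequences might be built, not a specification taken from the article.

```python
# Illustrative sketch only: second-order conditional (SOC) structure.
# In an SOC sequence the next location is fully determined by the previous
# TWO locations, so first-order transitions carry no predictive information.
# The "reversed" control used here is simply the training loop read backwards
# (an assumption about how RSOC material might be constructed).

from collections import defaultdict

# A 12-trial repeating loop over four locations (hypothetical example).
training = [1, 2, 1, 3, 4, 2, 3, 1, 4, 3, 2, 4]

def second_order_rule(seq):
    """Map each pair of successive locations to the location(s) that follow it."""
    rule = {}
    n = len(seq)
    for i in range(n):
        pair = (seq[i], seq[(i + 1) % n])      # treat the loop as circular
        nxt = seq[(i + 2) % n]
        rule.setdefault(pair, set()).add(nxt)
    return rule

train_rule = second_order_rule(training)
transfer_rule = second_order_rule(training[::-1])  # reversed loop as control

# The SOC property holds when every observed pair predicts exactly one successor,
# for the training loop and for its reversal alike.
assert all(len(v) == 1 for v in train_rule.values())
assert all(len(v) == 1 for v in transfer_rule.values())

# First-order transition counts: each non-repeating transition occurs once,
# so single locations are uninformative about what comes next.
first_order = defaultdict(int)
for i in range(len(training)):
    first_order[(training[i], training[(i + 1) % len(training)])] += 1
print(train_rule)
print(dict(first_order))
```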

2000
Author(s): Joanna Salidas, Daniel B. Willingham, John D. E. Gabrieli

2016
Author(s): Marius Barth, Christoph Stahl, Hilde Haider

In implicit sequence learning, a process-dissociation (PD) approach has been proposed to dissociate implicit and explicit learning processes. Applied to the popular generation task, participants perform two different task versions: inclusion instructions require generating the transitions that form the learned sequence; exclusion instructions require generating transitions other than those of the learned sequence. Whereas accurate performance under inclusion may be based on either implicit or explicit knowledge, avoiding the generation of learned transitions requires controllable explicit sequence knowledge. The PD approach yields separate estimates of explicit and implicit knowledge derived from the same task; it therefore avoids many problems of previous measurement approaches. However, the PD approach rests on the critical assumption that the implicit and explicit processes are invariant across inclusion and exclusion conditions. We tested whether these invariance assumptions hold for the PD generation task. Across three studies using first-order as well as second-order regularities, invariance of the controlled process was found to be violated: despite extensive practice, explicit knowledge was not exhaustively expressed in the exclusion condition. We discuss the implications of these findings for the use of process dissociation in assessing implicit knowledge.
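For concreteness, the sketch below applies the standard process-dissociation estimates in the way they are commonly used for generation data: controlled (explicit) knowledge C = I - E and automatic (implicit) knowledge A = E / (1 - C), where I and E are the proportions of trained transitions generated under inclusion and exclusion. The variable names and the worked numbers are ours, not values reported in the study.

```python
# Minimal sketch (our parameterization, not necessarily the authors'):
# standard process-dissociation estimates applied to generation performance.
# I = proportion of generated transitions that match the trained sequence
#     under INCLUSION instructions,
# E = the same proportion under EXCLUSION instructions.
# Under the PD assumptions:
#   C = I - E          (controlled / explicit knowledge)
#   A = E / (1 - C)    (automatic / implicit knowledge)

def pd_estimates(inclusion: float, exclusion: float) -> tuple[float, float]:
    controlled = inclusion - exclusion
    automatic = exclusion / (1.0 - controlled) if controlled < 1.0 else float("nan")
    return controlled, automatic

# Hypothetical example: 70% trained transitions under inclusion, 40% under exclusion.
C, A = pd_estimates(0.70, 0.40)
print(f"explicit (controlled) estimate C = {C:.2f}")  # 0.30
print(f"implicit (automatic) estimate  A = {A:.2f}")  # 0.40 / 0.70 ~ 0.57
```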


1995 · Vol. 3 (4) · pp. 271–286
Author(s): Scott L. Rauch, Cary R. Savage, Halle D. Brown, Tim Curran, Nathaniel M. Alpert, ...

PLoS ONE · 2019 · Vol. 14 (9) · e0221966
Author(s): Emese Szegedi-Hallgató, Karolina Janacsek, Dezso Nemeth

2021
Author(s): Santiago Herce Castañón, Pedro Cardoso-Leite, Irene Altarelli, C. Shawn Green, Paul Schrater, ...

What role do generative models play in the generalization of learning in humans? Our novel multi-task prediction paradigm, in which participants complete four sequence learning tasks that are each a different instance of a common generative family, allows the separate study of within-task learning (i.e., finding the solution to each task) and across-task learning (i.e., learning a task differently because of past experience). The very first responses participants make in each task are not yet affected by within-task learning and thus reflect their priors. Our results show that these priors change across successive tasks, increasingly resembling the underlying generative family. We conceptualize multi-task learning as arising from a mixture-of-generative-models learning strategy, whereby participants simultaneously entertain multiple candidate models that compete to explain the experienced sequences. This framework predicts specific error patterns as well as a gating mechanism for learning, both of which are observed in the data.
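The sketch below is one schematic way a mixture-of-generative-models learner could be realized: several candidate models are entertained in parallel, re-weighted by how well each predicts every observed symbol, and combined into a single mixture prediction. The candidate models and the update rule are illustrative assumptions on our part, not the authors' actual model.

```python
# Schematic sketch (illustrative assumptions, not the authors' model):
# a learner that entertains several candidate generative models at once,
# re-weights them by how well they predict each observed symbol, and
# predicts the next symbol from the resulting mixture.

import numpy as np

ALPHABET = [0, 1, 2, 3]

def uniform_model(history):
    """Candidate 1: every symbol equally likely."""
    return np.full(len(ALPHABET), 1.0 / len(ALPHABET))

def repeat_model(history):
    """Candidate 2: the last symbol tends to repeat."""
    p = np.full(len(ALPHABET), 0.1 / (len(ALPHABET) - 1))
    if history:
        p[history[-1]] = 0.9
    else:
        p[:] = 1.0 / len(ALPHABET)
    return p

def alternate_model(history):
    """Candidate 3: the last symbol tends NOT to repeat."""
    p = np.full(len(ALPHABET), 0.9 / (len(ALPHABET) - 1))
    if history:
        p[history[-1]] = 0.1
    else:
        p[:] = 1.0 / len(ALPHABET)
    return p

MODELS = [uniform_model, repeat_model, alternate_model]

def run_mixture(sequence):
    weights = np.ones(len(MODELS)) / len(MODELS)   # prior over candidate models
    history = []
    for sym in sequence:
        preds = np.array([m(history) for m in MODELS])
        mixture = weights @ preds                  # mixture prediction for this trial
        weights *= preds[:, sym]                   # Bayesian re-weighting by likelihood
        weights /= weights.sum()
        history.append(sym)
    return weights, mixture

final_weights, last_prediction = run_mixture([0, 0, 0, 1, 1, 1, 2, 2])
print("posterior over candidate models:", np.round(final_weights, 3))
```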

