Implicit learning and exploitation of regularities involve hippocampal and prefrontal theta activity

2019
Author(s):
Eelke Spaak
Floris P. de Lange

Abstract
Observers rapidly and seemingly automatically learn to predict where to expect relevant items when those items are repeatedly presented in the same spatial context. This form of statistical learning in visual search has been studied extensively using a paradigm known as contextual cueing. The neural mechanisms underlying the learning and exploitation of such regularities remain unclear. We sought to elucidate them by examining behaviour and recording neural activity using magnetoencephalography (MEG) while observers implicitly acquired and exploited statistical regularities. Computational modelling of the behavioural data suggested that, after repeated exposures to a spatial context, participants' behaviour was marked by an abrupt switch to a strategy of exploiting the learnt regularities. MEG recordings showed that the initial learning phase was associated with larger hippocampal theta-band activity for repeated scenes, whereas the subsequent exploitation phase showed larger prefrontal theta-band activity for these scenes. Strikingly, the behavioural benefit of repeated exposures to certain scenes was inversely related to explicit awareness of such repeats, demonstrating the implicit nature of the acquired expectations. These findings elucidate how theta activity in the hippocampus and prefrontal cortex underpins the implicit learning and exploitation of spatial statistical regularities to optimize visual search behaviour.
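
The abrupt-switch idea can be made concrete with a toy changepoint analysis: compare a step model, in which the reaction-time benefit for a repeated scene jumps from zero to a constant at an unknown exposure, against a gradual (linear) alternative. The sketch below is purely illustrative; the function names, least-squares comparison, and simulated numbers are ours, not the authors' actual model.

```python
import numpy as np

def fit_abrupt_switch(benefit):
    """Fit a step model: zero benefit before an unknown switch exposure,
    a constant benefit afterwards. Returns (switch_index, level, sse)."""
    n = len(benefit)
    best = (None, 0.0, np.inf)
    for k in range(1, n):                       # candidate switch points
        level = benefit[k:].mean()              # post-switch benefit level
        pred = np.concatenate([np.zeros(k), np.full(n - k, level)])
        sse = float(np.sum((benefit - pred) ** 2))
        if sse < best[2]:
            best = (k, level, sse)
    return best

def fit_gradual(benefit):
    """Least-squares line through the benefit time course (gradual learning)."""
    x = np.arange(len(benefit))
    slope, intercept = np.polyfit(x, benefit, 1)
    return float(np.sum((benefit - (slope * x + intercept)) ** 2))

# Simulated benefit (ms) that jumps after 8 exposures, plus noise.
rng = np.random.default_rng(0)
benefit = np.concatenate([np.zeros(8), np.full(12, 120.0)]) + rng.normal(0, 25, 20)
k, level, sse_step = fit_abrupt_switch(benefit)
print(f"switch at exposure {k}, benefit ~{level:.0f} ms; "
      f"step SSE {sse_step:.0f} vs linear SSE {fit_gradual(benefit):.0f}")
```

If behaviour is indeed marked by an abrupt switch, the step model should fit the benefit time course with a markedly lower residual error than the line.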

2001
Vol 54 (4)
pp. 1105-1124
Author(s):
Yuhong Jiang
Marvin M. Chun

The effect of selective attention on implicit learning was tested in four experiments using the “contextual cueing” paradigm (Chun & Jiang, 1998, 1999). Observers performed visual search through items presented in an attended colour (e.g., red) and an ignored colour (e.g., green). When the spatial configuration of items in the attended colour was invariant and was consistently paired with a target location, visual search was facilitated, showing contextual cueing (Experiments 1, 3, and 4). In contrast, repeating and pairing the configuration of the ignored items with the target location resulted in no contextual cueing (Experiments 2 and 4). We conclude that implicit learning is robust only when relevant, predictive information is selectively attended.
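
In the contextual cueing paradigm, the effect of interest is simply the reaction-time difference between displays with novel configurations and displays with repeated, predictive configurations. Below is a minimal sketch of that contrast on hypothetical trial-level data; the column values are invented for illustration.

```python
import numpy as np

def contextual_cueing(rts, condition):
    """Contextual cueing = mean RT(novel) - mean RT(repeated), in ms."""
    rts = np.asarray(rts, dtype=float)
    condition = np.asarray(condition)
    return rts[condition == "novel"].mean() - rts[condition == "repeated"].mean()

# Toy trial-level data: repeated displays are found faster.
rts = [950, 870, 990, 860, 940, 850]
cond = ["novel", "repeated", "novel", "repeated", "novel", "repeated"]
print(f"cueing effect: {contextual_cueing(rts, cond):.0f} ms")  # ~100 ms
```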


2012
Vol 107 (12)
pp. 3458-3467
Author(s):
Iris Steinmann
Alexander Gutschalk

Human functional MRI (fMRI) and magnetoencephalography (MEG) studies indicate a pitch-specific area in lateral Heschl's gyrus. Single-cell recordings in monkeys suggest that sustained-firing, pitch-specific neurons are located lateral to primary auditory cortex. We reevaluated whether pitch strength contrasts reveal sustained pitch-specific responses in human auditory cortex. Sustained BOLD activity in auditory cortex was found for iterated rippled noise (vs. noise or silence) but not for regular click trains (vs. jittered click trains or silence). In contrast, iterated rippled noise and click trains produced similar pitch responses in MEG. A subsequent time-frequency analysis of the MEG data suggested that the dissociation of cortical BOLD activity between iterated rippled noise and click trains is related to theta-band activity. It appears that both sustained BOLD and theta activity are associated with slow, non-pitch-specific stimulus fluctuations. BOLD activity in the inferior colliculus was sustained for both stimulus types and varied neither with pitch strength nor with the presence of slow stimulus fluctuations. These results suggest that BOLD activity in auditory cortex is much more sensitive to slow stimulus fluctuations than to constant pitch, compromising the accessibility of the latter. In contrast, pitch-related activity in MEG can easily be separated from theta-band activity related to slow stimulus fluctuations.
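
Separating theta-band activity from other components of a sensor time course typically begins with a time-frequency decomposition. Below is a minimal sketch using SciPy's spectrogram on simulated data; the sampling rate, window lengths, and 4–8 Hz band edges are illustrative choices, not the parameters of the study.

```python
import numpy as np
from scipy import signal

fs = 600.0                                    # sampling rate (Hz), illustrative
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

# Toy sensor signal: a 6 Hz "theta" burst between 3 s and 6 s, buried in noise.
x = rng.normal(0, 1, t.size)
burst = (t > 3) & (t < 6)
x[burst] += 2 * np.sin(2 * np.pi * 6 * t[burst])

# Time-frequency decomposition; average power in the 4-8 Hz theta band.
f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=int(2 * fs),
                                noverlap=int(1.5 * fs))
theta = Sxx[(f >= 4) & (f <= 8)].mean(axis=0)
print(f"theta power peaks at ~{tt[np.argmax(theta)]:.1f} s")  # inside the burst
```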


2019
Vol 82 (4)
pp. 1682-1694
Author(s):
Siyi Chen
Zhuanghua Shi
Xuelian Zang
Xiuna Zhu
Leonardo Assumpção
...  

Abstract
It is well established that statistical learning of visual target locations in relation to constantly positioned visual distractors facilitates visual search. In the present study, we investigated whether such a contextual-cueing effect would also work crossmodally, from touch onto vision. Participants responded to the orientation of a visual target singleton presented among seven homogeneous visual distractors. Four tactile stimuli, two to different fingers of each hand, were presented either simultaneously with or prior to the visual stimuli. The identity of the stimulated fingers provided the crossmodal context cue: in half of the trials, a given visual target location was consistently paired with a given tactile configuration. The visual stimuli were presented above the unseen fingers, ensuring spatial correspondence between vision and touch. We found no evidence of crossmodal contextual cueing when the two sets of items (tactile, visual) were presented simultaneously (Experiment 1). However, a reliable crossmodal effect emerged when the tactile distractors preceded the onset of the visual stimuli by 700 ms (Experiment 2). Crossmodal cueing disappeared again when, after an initial learning phase, participants flipped their hands, making the tactile distractors appear at different positions in external space while their somatotopic positions remained unchanged (Experiment 3). In all experiments, participants were unable to explicitly discriminate learned from novel multisensory arrays. These findings indicate that search-facilitating context memory can be established across vision and touch. However, in order to guide visual search, the (predictive) tactile configurations must be remapped from their initial somatotopic format into a common external representational format.
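
The hand-flip manipulation of Experiment 3 can be pictured as a posture-dependent lookup: the same stimulated finger (the somatotopic code) occupies a different location in external space once the hands are crossed, so an association learned in external coordinates no longer points at the right place. The mapping below is a toy illustration with made-up position indices, not the study's actual stimulus geometry.

```python
# Toy illustration: the somatotopic code (hand, finger) stays fixed,
# but its external, left-to-right position depends on hand posture.

def external_position(hand: str, finger: str, hands_flipped: bool) -> int:
    """Map a stimulated finger to a left-to-right position index (0-3)."""
    uncrossed = {("left", "middle"): 0, ("left", "index"): 1,
                 ("right", "index"): 2, ("right", "middle"): 3}
    pos = uncrossed[(hand, finger)]
    # Crossing the hands mirrors every finger's external position.
    return 3 - pos if hands_flipped else pos

cue = ("left", "index")                              # same somatotopic event...
print(external_position(*cue, hands_flipped=False))  # -> 1 (uncrossed)
print(external_position(*cue, hands_flipped=True))   # -> 2 (hands flipped)
```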


2007
Vol 60 (10)
pp. 1321-1328
Author(s):
Valeria Rausei
Tal Makovski
Yuhong V. Jiang

How much attention is needed to produce implicit learning? Previous studies have found inconsistent results, with some implicit learning tasks requiring virtually no attention while others depend on it. In this study, we examined the degree to which implicit learning of repeated visual search contexts depends on attention. Observers searched for a target among distractors that were either highly similar or dissimilar to the target. We found that the size of contextual cueing was comparable for repetitions of the two types of distractors, even though attention dwelled much longer on distractors highly similar to the target. We suggest that, beyond a minimal amount, further increases in attentional dwell time do not contribute significantly to implicit learning of repeated search contexts.


Author(s):  
Markus Conci
Martina Zellin

Abstract
Visual search for a target is faster when the spatial layout of nontarget items is repeatedly encountered, illustrating that learned contextual invariances can improve attentional selection (contextual cueing). This type of contextual learning is usually relatively efficient, but relocating the target to an unexpected location (within an otherwise unchanged layout) typically abolishes contextual cueing. Here, we explored whether bottom-up attentional guidance can mediate efficient contextual adaptation after such a change. Two experiments presented an initial learning phase, followed by a relocation phase that introduced target location changes. The location change was accompanied by transient attention-guiding signals that either up-modulated the changed target location (Experiment 1) or provided an inhibitory tag to down-modulate the initial target location (Experiment 2). The results of these two experiments showed reliable contextual cueing both before and after the target location change. By contrast, a control experiment (Experiment 3) that presented no attention-guiding signals together with the changed target showed no reliable cueing in the relocation phase, replicating previous findings. This pattern of results suggests that attentional guidance (by transient, stimulus-driven facilitatory and inhibitory signals) enhances the flexibility of long-term contextual learning.


2014
Vol 72 (9)
pp. 687-693
Author(s):
Guaraci Ken Tanaka
Caroline Peressutti
Silmar Teixeira
Mauricio Cagy
Roberto Piedade
...  

The acute and long-term effects of mindfulness meditation on theta-band activity remain unclear. The aim of this study was to investigate frontal theta differences between long- and short-term mindfulness practitioners before, during, and after mindfulness meditation. Twenty participants were recruited, of whom 10 were experienced Buddhist meditators. Despite an acute increase in theta activity during meditation in both groups, the meditators showed lower trait frontal theta activity. We suggest that this finding is a neural correlate of expert practitioners' ability to limit the processing of unnecessary information (e.g., discursive thought) and to increase awareness of the essential content of the present experience. In conclusion, acute changes in the theta band throughout meditation did not appear to be a specific correlate of mindfulness but were rather related to the concentrative properties of the meditation. Nevertheless, lower frontal theta activity appeared to be a trait marker of mindfulness practice.
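
Comparing trait theta between groups boils down to estimating band power per recording and contrasting the group means. Below is a minimal sketch using Welch's method on simulated signals; the sampling rate, band edges, and effect direction are illustrative assumptions, not the study's recording parameters.

```python
import numpy as np
from scipy import signal

def theta_power(eeg, fs, band=(4.0, 8.0)):
    """Mean power spectral density in the theta band for one channel."""
    f, pxx = signal.welch(eeg, fs=fs, nperseg=int(4 * fs))
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].mean()

fs = 250.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(2)
# Simulated frontal channels: the "meditator" carries a weaker 6 Hz rhythm.
meditator = rng.normal(0, 1, t.size) + 0.3 * np.sin(2 * np.pi * 6 * t)
novice = rng.normal(0, 1, t.size) + 0.6 * np.sin(2 * np.pi * 6 * t)
print(theta_power(meditator, fs) < theta_power(novice, fs))  # True
```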


2020
Author(s):
Floortje G. Bouwkamp
Floris P. de Lange
Eelke Spaak

Abstract
The human visual system can rapidly extract regularities from our visual environment, generating predictive context. It has been shown that spatial predictive context can be used during visual search. We set out to test whether observers can additionally exploit temporal predictive context, using an extended version of a contextual cueing paradigm. Although we replicated the contextual cueing effect, repeating search scenes in a structured order versus a random order yielded no additional behavioural benefit. This was true both for participants who were sensitive to spatial predictive context and for those who were not. We argue that spatial predictive context during visual search is more readily learned, and subsequently exploited, than temporal predictive context, potentially rendering the latter redundant. In conclusion, unlike spatial context, temporal context is not automatically extracted and used during visual search.


2021
Vol 8 (3)
Author(s):
Floortje G. Bouwkamp
Floris P. de Lange
Eelke Spaak

The human visual system can rapidly extract regularities from our visual environment, generating predictive context. It has been shown that spatial predictive context can be used during visual search. We set out to test whether observers can additionally exploit temporal predictive context based on sequence order, using an extended version of a contextual cueing paradigm. Although we replicated the contextual cueing effect, repeating search scenes in a structured order versus a random order yielded no additional behavioural benefit. This was also true when we looked specifically at participants who showed a sensitivity to spatial predictive context. We argue that spatial predictive context during visual search is more readily learned, and subsequently exploited, than temporal predictive context, potentially rendering the latter redundant. In conclusion, unlike spatial context, temporal context is not automatically extracted and used during visual search.
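
The temporal manipulation amounts to presenting the same set of repeated scenes either in one fixed sequence every block or freshly reshuffled each block. Below is a toy sketch of how such block orders could be generated; the scene labels and block count are hypothetical.

```python
import random

scenes = ["A", "B", "C", "D"]          # hypothetical repeated search displays

def block_order(structured: bool, rng: random.Random) -> list:
    """Structured blocks keep one fixed sequence; random blocks reshuffle."""
    order = list(scenes)
    if not structured:
        rng.shuffle(order)             # fresh order every block
    return order

rng = random.Random(3)
print([block_order(True, rng) for _ in range(3)])   # identical sequences
print([block_order(False, rng) for _ in range(3)])  # reshuffled each block
```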

