Eye movements in real-life search are guided by task-irrelevant working-memory content

2020 ◽  
Author(s):  
Cherie Zhou ◽  
Monicque M. Lorist ◽  
Sebastiaan Mathôt

Abstract
Attention is automatically guided towards stimuli that match the contents of working memory. This has been studied extensively with simplified computer tasks, but whether memory-driven guidance also affects real-life search has often been assumed and never tested. Here we addressed this open question in a naturalistic environment that closely resembles real life. In two experiments, participants wore a mobile eye tracker and memorized a color prior to a search task in which they looked for a target word among book covers on a bookshelf. The memory color was irrelevant to the search task. Nevertheless, we found that participants’ gaze was strongly guided towards book covers that matched the memory color. Crucially, this memory-driven guidance was evident from the very start of the search period. These findings show that attention is guided towards working-memory content in real-world search, and that this guidance is fast and therefore likely reflects an automatic process.

Significance statement
A core concept in the field of visual working memory (VWM) is that visual attention is automatically guided towards things that resemble the content of VWM. For example, if you hold the color red in VWM, your attention and gaze are automatically drawn towards red things in the environment. So far, studies of such memory-driven guidance have only been conducted with well-controlled computer tasks that used simplified search displays. Here we address the crucial, open question of whether attention is guided by the content of VWM in a naturalistic environment that closely resembles real life. To do so, we conducted two experiments with mobile eye tracking. Crucially, we found strong memory-driven guidance from the very early phase of the search, indicating that this is a fast, and therefore likely automatic, process that also drives visual search in real life.

2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Jessica McFadyen ◽  
Christopher Nolan ◽  
Ellen Pinocy ◽  
David Buteri ◽  
Oliver Baumann

Abstract

Background: The ‘doorway effect’, or ‘location updating effect’, claims that we tend to forget items of recent significance immediately after crossing a boundary. Previous research suggests that such a forgetting effect occurs both at physical boundaries (e.g., moving from one room to another via a door) and metaphysical boundaries (e.g., imagining traversing a doorway, or even moving from one desktop window to another on a computer). Here, we aimed to conceptually replicate this effect using virtual and physical environments.

Methods: Across four experiments, we measured participants’ hit and false alarm rates to memory probes for items recently encountered either in the same or the previous room. Experiments 1 and 2 used highly immersive virtual reality, without and with working memory load respectively. Experiment 3 used passive video watching and Experiment 4 used active real-life movement. Data were analysed with frequentist as well as Bayesian inference statistics.

Results: Across this series of experiments, we observed no significant effect of doorways on forgetting. In Experiment 2, however, signal detection was impaired when participants responded to probes after moving through doorways, such that false alarm rates increased for mismatched recognition probes. Thus, under working memory load, memory was more susceptible to interference after moving through doorways.

Conclusions: This study presents evidence that is inconsistent with the location updating effect as previously reported. Our findings call into question the generalisability and robustness of this effect to slight paradigm alterations and, indeed, what factors contributed to the effect observed in previous studies.
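The signal-detection analysis this abstract describes (hit and false alarm rates to recognition probes) is conventionally summarized by the sensitivity index d′. The sketch below is illustrative only; the counts are hypothetical, not the study's data.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 per cell) avoids infinite
    z-scores when an observed rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: probes answered in the same room vs. after a doorway.
# Raising the false-alarm rate (as reported in Experiment 2) lowers d'.
same_room = d_prime(hits=45, misses=5, false_alarms=8, correct_rejections=42)
doorway = d_prime(hits=44, misses=6, false_alarms=15, correct_rejections=35)
```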


2021 ◽  
Author(s):  
Oliver Ratcliffe ◽  
Kimron Shapiro ◽  
Bernhard P. Staresina

Abstract
How does the human brain manage multiple bits of information to guide goal-directed behaviour? Successful working memory (WM) functioning has consistently been linked to oscillatory power in the theta frequency band (4-8 Hz) over fronto-medial cortex (fronto-medial theta, FMT). Specifically, FMT is thought to reflect the mechanism of an executive sub-system that coordinates maintenance of memory contents in posterior regions. However, direct evidence for the role of FMT in controlling specific WM content is lacking. Here we collected high-density electroencephalography (EEG) data whilst participants engaged in load-varying WM tasks and then used multivariate decoding methods to examine WM content during the maintenance period. Higher WM load elicited a focal increase in FMT. Importantly, decoding of WM content was driven by posterior/parietal sites, which in turn showed load-induced functional theta coupling with fronto-medial cortex. Finally, we observed a significant slowing of FMT frequency with increasing WM load, consistent with the hypothesised broadening of a theta ‘duty cycle’ to accommodate additional WM items. Together these findings demonstrate that frontal theta orchestrates posterior maintenance of WM content. Moreover, the observed frequency slowing elucidates the function of FMT oscillations by specifically supporting phase-coding accounts of WM.

Significance statement
How does the brain juggle the maintenance of multiple items in working memory (WM)? Here we show that increased WM demands increase theta power (4-8 Hz) in fronto-medial cortex. Interestingly, using a machine learning approach, we found that the content held in WM could be read out not from frontal, but from posterior areas. These areas were in turn functionally coupled with fronto-medial cortex, consistent with the idea that frontal cortex orchestrates WM representations in posterior regions. Finally, we observed that holding an additional item in WM leads to significant slowing of the frontal theta rhythm, supporting computational models that postulate longer ‘duty cycles’ to accommodate additional WM demands.
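The ‘duty cycle’ logic behind the predicted theta slowing can be made concrete with a back-of-the-envelope calculation. In phase-coding models, each WM item occupies one gamma sub-cycle nested within a theta cycle, so the theta period must be at least n gamma periods long. The 40 Hz gamma rate below is an illustrative assumption, not a value from the study.

```python
def max_theta_frequency(n_items, gamma_hz=40.0):
    """Upper bound on theta frequency if each WM item occupies one gamma
    sub-cycle nested within a theta cycle: the theta period must span at
    least n_items gamma periods, so f_theta <= f_gamma / n_items."""
    return gamma_hz / n_items

# More items -> a longer theta 'duty cycle' -> a slower theta rhythm
bounds = {n: max_theta_frequency(n) for n in (3, 5, 8)}
```

With these assumed numbers, five items already push theta to the top of the 4-8 Hz band, and eight items push it to the bottom, which is the qualitative load-dependent slowing the abstract reports.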


2021 ◽  
Vol 15 ◽  
Author(s):  
Francesca Borgnis ◽  
Francesca Baglio ◽  
Elisa Pedroli ◽  
Federica Rossetto ◽  
Giuseppe Riva ◽  
...  

Executive dysfunctions constitute a significant public health problem due to their high impact on everyday life and personal independence. The identification of early strategies to assess and rehabilitate these impairments is therefore a priority. The ecological limitations of traditional neuropsychological tests and the numerous difficulties of administering tests in real-life scenarios have led to the increasing use of virtual reality (VR) and 360°-environment-based tools for assessing executive functions (EFs) in real life. This perspective proposes the development and implementation of Executive-functions Innovative Tool 360° (EXIT 360°), an innovative, enjoyable, and ecologically valid tool for a multidimensional and multicomponent evaluation of executive dysfunctions. EXIT 360° allows a complete and integrated assessment of executive functioning through an original EF task delivered via a mobile-powered VR headset combined with an eye tracker (ET) and electroencephalography (EEG). The tool was conceived as a 360°-based instrument, easily accessible and clinically usable, that will radically transform the assessment experience of clinicians and patients. In EXIT 360°, patients are engaged in a “game for health” in which they must perform everyday subtasks in 360° daily-life environments. In this way, clinicians can quickly obtain more ecologically valid information about several aspects of EFs (e.g., planning, problem-solving). Moreover, the multimodal approach completes the assessment of EFs by integrating verbal responses, reaction times, and physiological data (eye movements and brain activation). Overall, EXIT 360° will provide more information about executive dysfunction and its real-life impact, simultaneously and in real time, allowing clinicians to tailor rehabilitation to the subject’s needs.
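One way to picture the multimodal record the tool collects (verbal response, reaction time, eye movements, EEG markers) is as a single per-subtask structure. This is a minimal sketch; the field names are hypothetical and do not reflect EXIT 360°'s actual data schema.

```python
from dataclasses import dataclass, field

@dataclass
class SubtaskRecord:
    """One EXIT 360° subtask outcome combining behavioural and
    physiological channels (illustrative schema, not the tool's own)."""
    subtask: str                  # e.g. "planning", "problem-solving"
    verbal_response: str
    reaction_time_s: float
    fixations: list = field(default_factory=list)   # eye-tracker areas of interest
    eeg_events: list = field(default_factory=list)  # EEG marker timestamps (s)

# A hypothetical trial from a planning subtask
record = SubtaskRecord("planning", "take the keys before leaving", 2.4,
                       fixations=["keys", "door"])
```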


2021 ◽  
Vol 26 (3) ◽  
pp. 283-295
Author(s):  
Nathaniel L. Foster ◽  
Gregory R. Bell

We examined incidental learning of road signs under divided attention in a simulated naturalistic environment. We tested whether word-based versus symbol-based road signs were differentially maintained in working memory by dividing attention during encoding and measuring the effect on long-term memory. Participants in a lab watched a video from the point of view of a car driving the streets of a small town. Participants were instructed to indicate whether passing road signs in the video were on the left or right side of the street while either singing the Star-Spangled Banner (phonological divided attention) or describing familiar locations (visuospatial divided attention). For purposes of analysis, road signs were categorized as word signs if they contained words (e.g., a STOP sign) or as symbol signs if they contained illustrations or symbols (e.g., a pedestrian crosswalk sign). A surprise free recall test of the road signs indicated greater recall for word signs than symbol signs, and greater recall of signs for the phonological divided attention group than the visuospatial divided attention group. Critically, the proportion of correct recall of symbol signs was significantly lower for the visuospatial divided attention group than the phonological divided attention group, p = .02, d = 0.63, but recall for word signs was not significantly different between phonological and visuospatial groups, p = .09, d = 0.44. Results supported the hypothesis that visuospatial information—but not phonological information—is stored in working memory in a simulated naturalistic environment that involved incidental learning.
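The effect sizes this abstract reports (d = 0.63 and d = 0.44) are Cohen's d values, which can be computed from two groups' scores with a pooled standard deviation. The sketch below uses hypothetical recall proportions, not the study's data.

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2
                  + (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical per-participant proportions of symbol signs recalled
phonological = [0.50, 0.45, 0.60, 0.55, 0.40]
visuospatial = [0.35, 0.30, 0.45, 0.40, 0.25]
effect = cohens_d(phonological, visuospatial)
```

A positive d here means the phonological divided-attention group recalled more symbol signs, matching the direction of the reported group difference.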


Neuron ◽  
2018 ◽  
Vol 99 (3) ◽  
pp. 588-597.e5 ◽  
Author(s):  
Simon Nikolas Jacob ◽  
Daniel Hähnke ◽  
Andreas Nieder

2012 ◽  
Vol 25 (0) ◽  
pp. 117 ◽  
Author(s):  
Yi-Chuan Chen ◽  
Gert Westermann

Infants are able to learn novel associations between visual objects and auditory linguistic labels (such as a dog and the sound /dɔg/) by the end of their first year of life. Surprisingly, at this age they seem to fail to learn associations between visual objects and natural sounds (such as a dog and its barking sound). Researchers have therefore suggested that linguistic learning is special (Fulkerson and Waxman, 2007) or that unfamiliar sounds overshadow visual object processing (Robinson and Sloutsky, 2010). However, in previous studies visual stimuli were paired with arbitrary sounds in contexts lacking ecological validity. In the present study, we created animations of two novel animals and two realistic animal calls to construct two audiovisual stimuli. In the training phase, each animal was presented in motion that mimicked animal behaviour in real life: in a short movie, the animal ran (or jumped) from the periphery to the center of the monitor and made calls while raising its head. In the test phase, static images of both animals were presented side by side and the sound for one of the animals was played. Infant looking times to each stimulus were recorded with an eye tracker. We found that, following the sound, 12-month-old infants preferentially looked at the animal corresponding to it. These results show that 12-month-old infants are able to learn novel associations between visual objects and natural sounds in an ecologically valid situation, thereby challenging our current understanding of the development of crossmodal association learning.


2020 ◽  
Vol 7 (8) ◽  
pp. 190228 ◽  
Author(s):  
Quan Wan ◽  
Ying Cai ◽  
Jason Samaha ◽  
Bradley R. Postle

How does the neural representation of visual working memory content vary with behavioural priority? To address this, we recorded electroencephalography (EEG) while subjects performed a continuous-performance 2-back working memory task with oriented-grating stimuli. We tracked the transition of the neural representation of an item (n) from its initial encoding, to the status of ‘unprioritized memory item’ (UMI), and back to ‘prioritized memory item’, with multivariate inverted encoding modelling. Results showed that the representational format was remapped from its initially encoded format into a distinctive ‘opposite’ representational format when the item became a UMI, and then mapped back into its initial format when subsequently prioritized in anticipation of its comparison with item n + 2. Thus, contrary to the default assumption that the activity representing an item in working memory simply weakens when it is deprioritized, a process of priority-based remapping may help to protect remembered information when it is not in the focus of attention.
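The inverted encoding modelling mentioned here starts from a forward model: a bank of hypothetical orientation-tuned channels whose responses to any grating orientation can be predicted. A common choice, sketched below, is a half-wave-rectified cosine raised to a power; the exponent and 8-channel basis are illustrative assumptions, not the study's exact parameters.

```python
from math import cos, radians

def channel_response(theta_deg, pref_deg, exponent=6):
    """Idealized orientation channel: a half-wave-rectified cosine raised
    to a power, computed in doubled-angle space so that 0 and 180 degrees
    (the same grating orientation) are treated as identical."""
    r = cos(radians(2.0 * (theta_deg - pref_deg)))
    return max(r, 0.0) ** exponent

# A basis set of 8 channels evenly tiling the 0-180 degree orientation space;
# 'profile' is the predicted channel response pattern for a 90-degree grating.
prefs = [i * 22.5 for i in range(8)]
profile = [channel_response(90.0, p) for p in prefs]
```

Inverting this forward model against measured EEG patterns is what lets the channel response profile (and hence the remembered orientation) be reconstructed at each task phase.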


1998 ◽  
Vol 9 (1) ◽  
pp. 66-70 ◽  
Author(s):  
J. Steven Reznick ◽  
J. J. Fueser ◽  
Michelle Bosquet

Infants watched an experimenter hide a toy in one of three wells and then attempted to retrieve it after a brief delay. Seven-month-olds performed at chance. Nine-month-olds reached correctly on 43% of trials, which is significantly better than chance. After an incorrect reach, infants were allowed to choose between the two remaining locations. Seven-month-olds responded at a chance level on their second reach, but 9-month-olds chose correctly more often than would be expected by chance despite a 10- to 20-s delay between hiding and search. One cause of error on the initial reach was a profound bias toward the center well. In Experiment 2, the wells were covered simultaneously, and the infant's spatial orientation was disrupted during the delay; this procedure eliminated the centripetal bias. Nine-month-olds still responded correctly more often than would be expected by chance on their second reach. These findings suggest that 9-month-olds sometimes have a more durable working memory for location than is generally reported for that age group.
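The chance comparisons in this abstract follow directly from the task geometry: three wells give a guessing rate of 1/3 on the first reach, and two remaining wells give 1/2 on the second. A one-sided binomial tail probability makes the "better than chance" test concrete; the trial counts below are hypothetical, not the study's.

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the probability of observing k or
    more correct reaches if the infant were guessing with success rate p."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# First reach: three wells, chance = 1/3 (e.g. 13 of 30 correct, roughly 43%)
p_first = binom_sf(13, 30, 1 / 3)
# Second reach after an error: two wells remain, chance = 1/2
p_second = binom_sf(10, 15, 1 / 2)
```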

