screen location
Recently Published Documents


TOTAL DOCUMENTS: 11 (FIVE YEARS: 4)

H-INDEX: 2 (FIVE YEARS: 2)

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Eitan Schechtman ◽  
James W. Antony ◽  
Anna Lampe ◽  
Brianna J. Wilson ◽  
Kenneth A. Norman ◽  
...  

Abstract: Memory consolidation involves the reactivation of memory traces during sleep. If different memories are reactivated each night, how much do they interfere with one another? We examined whether reactivating multiple memories incurs a cost to sleep-related benefits by contrasting reactivation of multiple versus single memories during sleep. First, participants learned the on-screen locations of different objects. Each object belonged to a semantically coherent group of one, two, or six items (e.g., six different cats). During sleep, sounds were unobtrusively presented to reactivate memories for half of the groups (e.g., “meow”). Memory benefits for cued versus non-cued items were independent of group size, suggesting that reactivation occurs in a simultaneous and promiscuous manner. Intriguingly, sleep spindles and delta-theta power modulations were sensitive to group size, reflecting the extent of previous learning. Our results demonstrate that multiple memories may be consolidated in parallel without compromising each memory’s sleep-related benefit. These findings highlight alternative models for parallel consolidation that should be considered in future studies.


2019 ◽  
Author(s):  
Eitan Schechtman ◽  
James W. Antony ◽  
Anna Lampe ◽  
Brianna J. Wilson ◽  
Kenneth A. Norman ◽  
...  

Abstract: Memory consolidation involves the reactivation of memory traces during sleep. If many memories are reactivated each night, how much do they interfere with one another? To explore this question, we examined whether reactivating multiple memories incurs a cost to sleep-related benefits by contrasting reactivation of multiple versus single memories during sleep. First, participants learned the on-screen locations of different images. Each image was part of a semantically interconnected group (e.g., images of different cats). Groups comprised one, two, or six images. During sleep, group-related sounds (e.g., “meow”) were unobtrusively presented to reactivate memories for half of the groups. The benefit in location recall for cued versus non-cued items was independent of the number of items in the group, suggesting that reactivation occurs in a simultaneous, promiscuous manner. Intriguingly, sleep spindles and delta-theta power modulations were sensitive to group size and reflected the extent of previous learning. Our results demonstrate that multiple memories may be consolidated in parallel without compromising each memory’s sleep-related benefit, suggesting that the brain’s capacity for reactivation is not strictly limited by separate resources needed for individual memories. These findings highlight alternative models for parallel consolidation that should be considered in future studies.


2017 ◽  
Author(s):  
Andrei Gorea ◽  
Lionel Granjon ◽  
Dov Sagi

Abstract: Are we aware of the outcome of our actions? Participants pointed rapidly at a screen location marked by a transient visual target (T), with and without seeing their hand, and were then asked to estimate (E) their landing location (L) using the same finger, without time constraints. We found that L and E were systematically and idiosyncratically shifted away from their corresponding targets (T and L, respectively), suggesting unawareness. Moreover, E was biased away from L, toward T (by 21% and 37%, with and without visual feedback, respectively), in line with a putative Bayesian account of the results, assuming a strong prior in the absence of vision. However, the precisions of L (the assumed prior) and E (the assumed posterior) were practically identical, arguing against such an account. Instead, the results are well accounted for by a simple model positing that participants’ E is set to the planned rather than the actual L: when asked to estimate their landing location, participants appeared to reenact their original motor plan.
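The precision argument in the abstract above rests on a standard property of Bayesian cue combination: when two Gaussian cues are fused, their precisions (inverse variances) add, so the posterior estimate should be more precise than either cue alone. Equal precisions for L and E are therefore hard to reconcile with a Bayesian account. The sketch below is a generic illustration of that property, not a model from the study; the function name and the numbers are hypothetical.

```python
# Generic Gaussian cue fusion (illustrative only; numbers are not from the study).
# Under a Bayesian account, the estimate E would combine a prior with sensory
# evidence about the landing position L. Precisions add, so the posterior (E)
# should be MORE precise than either cue -- the prediction the abstract argues
# against, since the measured precisions of L and E were practically identical.

def combine_gaussian(mu_prior, tau_prior, mu_like, tau_like):
    """Fuse two Gaussian cues; return posterior mean and precision."""
    tau_post = tau_prior + tau_like  # precisions (inverse variances) add
    # Posterior mean is the precision-weighted average of the two cue means.
    mu_post = (tau_prior * mu_prior + tau_like * mu_like) / tau_post
    return mu_post, tau_post

# Hypothetical numbers: prior (target/plan) at 0, sensed landing at 10.
mu_e, tau_e = combine_gaussian(mu_prior=0.0, tau_prior=1.0,
                               mu_like=10.0, tau_like=3.0)
print(mu_e, tau_e)  # mean lies between the cues; precision exceeds either cue
```

Here the fused estimate is pulled toward the prior and its precision (4.0) exceeds that of either input cue (1.0 and 3.0), which is the signature the study's data lacked.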


PeerJ ◽  
2016 ◽  
Vol 4 ◽  
pp. e2241 ◽  
Author(s):  
Christopher J. Luke ◽  
Petra M.J. Pollux

Eye tracking has been used during face categorisation and identification tasks to identify perceptually salient facial features and infer underlying cognitive processes. However, viewing patterns are influenced by a variety of gaze biases, drawing fixations to the centre of a screen and horizontally to the left side of face images (left-gaze bias). In order to investigate potential interactions between gaze biases uniquely associated with facial expression processing, and those associated with screen location, face stimuli were presented in three possible screen positions to the left, right and centre. Comparisons of fixations between screen locations highlight a significant impact of the screen centre bias, pulling fixations towards the centre of the screen and modifying gaze biases generally observed during facial categorisation tasks. A left horizontal bias for fixations was found to be independent of screen position but interacting with screen centre bias, drawing fixations to the left hemi-face rather than just to the left of the screen. Implications for eye tracking studies utilising centrally presented faces are discussed.


Ergonomics ◽  
2004 ◽  
Vol 47 (8) ◽  
pp. 907-921 ◽  
Author(s):  
Jonathan Ling ◽  
Paul van Schaik
