visual cue
Recently Published Documents


TOTAL DOCUMENTS: 316 (last five years: 75)

H-INDEX: 28 (last five years: 2)

2022
Author(s): Allison T Goldstein, Terrence R Stanford, Emilio Salinas

Oculomotor circuits generate eye movements based on the physical salience of objects and current behavioral goals, exogenous and endogenous influences, respectively. However, the interactions between exogenous and endogenous mechanisms and their dynamic contributions to target selection have been difficult to resolve because they evolve extremely rapidly. In a recent study (Salinas et al., 2019), we achieved the necessary temporal precision using an urgent variant of the antisaccade task wherein motor plans are initiated early and choice accuracy depends sharply on when exactly the visual cue information becomes available. Empirical and modeling results indicated that the exogenous signal arrives ~80 ms after cue onset and rapidly accelerates the (incorrect) plan toward the cue, whereas the informed endogenous signal arrives ~25 ms later to favor the (correct) plan away from the cue. Here, we scrutinize a key mechanistic hypothesis about this dynamic, that the exogenous and endogenous signals act at different times and independently of each other. We test quantitative model predictions by comparing the performance of human participants instructed to look toward a visual cue versus away from it under high urgency. We find that, indeed, the exogenous response is largely impervious to task instructions; it simply flips its sign relative to the correct choice, and this largely explains the drastic differences in psychometric performance between the two tasks. Thus, saccadic choices are strongly dictated by the alignment between salience and behavioral goals.


Author(s): Joong Kee Youn, Dongheon Lee, Dayoung Ko, Inhwa Yeom, Hyun-Jin Joo, ...

Author(s): Jasper de Waard, Louisa Bogaerts, Dirk van Moorselaar, Jan Theeuwes

Abstract The present study investigates the flexibility of statistically learned distractor suppression between different contexts. Participants performed the additional singleton task, searching for a unique shape while ignoring a uniquely colored distractor. Crucially, we created two contexts within the experiments, and each context was assigned its own high-probability distractor location, so that the location where the distractor was most likely to appear depended on the context. In Experiment 1, context was signaled by the color of the background. In Experiment 2, we aimed to differentiate the contexts more strongly, using an auditory or visual cue to indicate the upcoming context. In Experiment 3, context determined the appropriate response, ensuring that participants engaged with the context in order to perform the task. Across all experiments, participants learned to suppress both high-probability locations, even if they were not aware of these spatial regularities. However, these suppression effects occurred independently of context: the pattern of suppression reflected a de-prioritization of both high-probability locations that did not change with the context. We employed Bayesian analyses to statistically quantify the absence of context-dependent suppression effects. We conclude that statistically learned distractor suppression is robust and generalizes across contexts.


2021
Author(s): Carolyn Murray, Maisy Tarlow, Jesse Rissman, Ladan Shams

Associating names with faces can be challenging, but it is an important task that we engage in throughout our lives. An interesting feature of this task is the lack of an inherent, semantic relationship between a face and a name. Previous scientific research, as well as common lay theories, offer strategies that can aid in this task (e.g., mnemonics, semantic associations). However, these strategies are either impractical (e.g., spaced repetition) or cumbersome (e.g., mnemonics). The current study seeks to understand whether bolstering names with cross-modal cues—specifically, name tags—may aid memory for face and name pairings. In a series of five experiments, we investigated whether the presentation of congruent auditory (vocal) and written names at encoding might benefit subsequent cued recall and recognition memory tasks. The first experiment consisted of short video clips of individuals verbally introducing themselves (auditory cue), presented with or without a name tag (visual cue). The results showed that participants, cued with a picture of a face, were more likely to recall the associated name when those names were encoded with a name tag (i.e., a congruent visual cue) than when no supporting cross-modal cue was available. Subsequent experiments probed the underlying mechanism for this facilitation of memory. The findings were consistent with a benefit of multisensory encoding, above and beyond any effect from the availability of multiple independent unisensory traces. Overall, these results extend previous findings of a benefit of multisensory encoding in learning and memory to a naturalistic associative memory task.


2021
pp. 174702182110590
Author(s): Alper Kumcu, Robin L. Thompson

Previous evidence shows that words with implicit spatial meaning or metaphorical spatial associations are perceptually simulated and can guide attention to associated locations (e.g., bird – upward location). In turn, simulated representations interfere with visual perception at an associated location. The present study investigates the effect of spatial associations on short-term verbal recognition memory to disambiguate between modal and amodal accounts of spatial interference effects across two experiments. Participants in both experiments encoded words presented in congruent and incongruent locations. Congruent and incongruent locations were based on an independent norming task. In Experiment 1, an auditorily presented word probed participants’ memory as they were visually cued to either the original location of the probe word or a diagonal location at retrieval. In Experiment 2, there was no cue at retrieval, but a neutral encoding condition was added in which words normed to central locations were shown. Results show that spatial associations affected memory performance, although spatial information was neither relevant nor necessary for successful retrieval: words in Experiment 1 were retrieved more accurately when there was a visual cue in the congruent location at retrieval, but only if they were encoded in a non-canonical position. A visual cue in the congruent location slowed down memory performance when retrieving highly imageable words. With no cue at retrieval (Experiment 2), participants were better at remembering spatially congruent words as opposed to neutral words. Results provide evidence in support of sensorimotor simulation in verbal memory and a perceptual competition account of spatial interference effects.


eLife, 2021, Vol 10
Author(s): Renée S Koolschijn, Anna Shpektor, William T Clarke, I Betina Ip, David Dupret, ...

The brain has a remarkable capacity to acquire and store memories that can later be selectively recalled. These processes are supported by the hippocampus, which is thought to index memory recall by reinstating information stored across distributed neocortical circuits. However, the mechanism that supports this interaction remains unclear. Here, in humans, we show that recall of a visual cue from a paired associate is accompanied by a transient increase in the ratio between glutamate and GABA in visual cortex. Moreover, these excitatory-inhibitory fluctuations are predicted by activity in the hippocampus. These data suggest that the hippocampus gates memory recall by indexing information stored across neocortical circuits using a disinhibitory mechanism.


2021
pp. 1-15
Author(s): Kim McDonough, Rachael Lindberg, Pavel Trofimovich, Oguzhan Tekin

Abstract This replication study seeks to extend the generalizability of an exploratory study (McDonough et al., 2019) that identified holds (i.e., temporary cessation of dynamic movement by the listener) as a reliable visual cue of non-understanding. Conversations between second language (L2) English speakers in the Corpus of English as a Lingua Franca Interaction (CELFI; McDonough & Trofimovich, 2019) containing non-understanding episodes (e.g., pardon?, what?, sorry?) were sampled and compared with understanding episodes (i.e., follow-up questions). External raters (N = 90) assessed the listener's comprehension under three rating conditions: +face/+voice, −face/+voice, and +face/−voice. The association between non-understanding and holds in McDonough et al. (2019) was confirmed. Although raters distinguished reliably between understanding and non-understanding episodes, they were not sensitive to facial expressions when judging listener comprehension. The initial and replication findings suggest that holds remain a promising visual signature of non-understanding that can be explored in future theoretically and pedagogically oriented contexts.

