Conscious and unconscious memory differentially impact attention: Eye movements, visual search, and recognition processes

2019 ◽  
Author(s):  
Michelle Ramey ◽  
Andrew P. Yonelinas ◽  
John M. Henderson

A hotly debated question is whether memory influences attention through conscious or unconscious processes. To address this controversy, we measured eye movements while participants searched repeated real-world scenes for embedded targets, and we assessed memory for each scene using confidence-based methods to isolate different states of subjective memory awareness. We found that memory-informed eye movements during visual search were predicted both by conscious recollection, which led to a highly precise first eye movement toward the remembered location, and by unconscious memory, which increased search efficiency by gradually directing the eyes toward the target throughout the search trial. In contrast, these eye movement measures were not influenced by familiarity-based memory (i.e., changes in subjective reports of memory strength). The results indicate that conscious recollection and unconscious memory can each play distinct and complementary roles in guiding attention to facilitate efficient extraction of visual information.
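
As a rough illustration of the kind of measure described above, the sketch below computes first-saccade precision as the distance between the first fixation and the remembered target location, grouped by subjective memory state. All coordinates, trial data, and state labels are hypothetical; this is not the authors' analysis pipeline.

```python
import numpy as np

def first_saccade_error(first_fixation, target):
    """Euclidean distance (pixels) between the landing point of the first
    eye movement and the remembered target location."""
    return float(np.linalg.norm(np.asarray(first_fixation) - np.asarray(target)))

# Hypothetical trials: (first fixation x/y, target x/y, reported memory state)
trials = [
    ((512, 390), (520, 400), "recollected"),
    ((300, 210), (640, 420), "familiar"),
    ((450, 500), (470, 515), "unconscious"),
]

by_state = {}
for fix, target, state in trials:
    by_state.setdefault(state, []).append(first_saccade_error(fix, target))

for state, errors in by_state.items():
    print(f"{state}: mean first-saccade error = {np.mean(errors):.1f} px")
```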


2021 ◽  
Vol 12 ◽  
Author(s):  
Miles Tallon ◽  
Mark W. Greenlee ◽  
Ernst Wagner ◽  
Katrin Rakoczy ◽  
Ulrich Frick

The results of two experiments are analyzed to determine how artistic expertise influences visual search. Experiment I comprised survey data from 1,065 students on self-reported visual memory skills and their ability to find three targets in four images of artwork. Experiment II comprised eye movement data from 50 Visual Literacy (VL) experts and non-experts, whose eye movements during visual search were analyzed for nine images of artwork as an external validation of the assessment tasks performed in Experiment I. No time constraint was set for completion of the visual search task. A latent profile analysis revealed four typical solution patterns among the students in Experiment I, depending on task completion time and on the probability of finding all three targets: a mainstream group, a group that completed easy images quickly and difficult images slowly, a fast but error-prone group, and a slow-working group. Self-reported eidetic memory, performance in art education, and visual imagination had a significant impact on latent class membership probability. We present a hidden Markov model (HMM) approach to uncover the underlying regions of attraction that emerge from visual search eye-movement behavior in Experiment II. VL experts and non-experts did not differ significantly in task time or in the number of targets found, but they did differ in their visual search process: compared to non-experts, experts showed greater precision in fixating specific prime and target regions, assessed through hidden-state fixation overlap. Exploratory analysis of the HMMs revealed differences between experts and non-experts in the image locations of attraction (HMM states): experts concentrated their attention on smaller image parts, whereas non-experts searched wider parts of the image. Differences between experts and non-experts depended on the relative saliency of the targets embedded in the images. HMMs can thus quantify the effect of expertise on exploratory eye movements executed during visual search tasks, although further research on HMMs and art expertise is required to confirm these exploratory results.
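
A minimal sketch of the general idea behind fitting an HMM to fixation coordinates, where each hidden state corresponds to a spatial region of attraction on the image. It assumes the third-party hmmlearn package and uses made-up fixation data; the number of states, the data, and the implementation details are illustrative, not those of the study.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # third-party; one possible HMM implementation

# Hypothetical fixation coordinates (x, y in pixels) from two viewers of one artwork
fixations_expert = np.array([
    [210, 340], [215, 352], [208, 347],
    [600, 120], [605, 118], [598, 125],
    [212, 345], [430, 260],
])
fixations_novice = np.array([
    [100, 500], [420, 300], [700, 90],
    [350, 450], [250, 200], [660, 110],
    [130, 480], [400, 320],
])

X = np.vstack([fixations_expert, fixations_novice])
lengths = [len(fixations_expert), len(fixations_novice)]

# Each hidden state is a 2-D Gaussian over fixation locations, i.e. a spatial
# "region of attraction" on the image.
hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=200, random_state=0)
hmm.fit(X, lengths)

print("State means (region centers):\n", hmm.means_)
print("Expert fixation-to-state assignment:", hmm.predict(fixations_expert))
```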



2018 ◽  
Vol 21 (3) ◽  
pp. 45-36
Author(s):  
A. K. Volkov ◽  
V. V. Ionov

Professional training of X-ray screening system operators is based on computer-based training (CBT), which uses adaptive training algorithms. In existing computer simulators, these algorithms include feedback mechanisms based on trainability indicators such as the frequency of detecting dangerous objects, the frequency of false alarms, and detection time. Further improvement of the effectiveness of operators' simulator training is associated with integrating psychophysiological mechanisms for monitoring their functional state. Based on an analysis of the particularities of X-ray screening operators' professional training, which centers on forming competence in the visual search for dangerous objects, eye tracking is the most promising method. Domestic and foreign studies of eye movement characteristics during professional tasks in training are actively being developed in various areas; however, in contrast to foreign work, there are no domestic studies of the peculiarities of visual search. This research considers the use of eye tracking technology in the training of X-ray screening system operators. In an experimental study using the mobile eye tracker SensoMotoric Instruments Eye Tracking Glasses 2.0, statistical data on the eye movement parameters of two groups of subjects with different levels of training were obtained. Cluster and discriminant analyses identified general classes of these parameters and yielded discriminant functions for each group under examination. The theoretical significance of studying operators' eye movements lies in identifying the patterns of visual search for prohibited items. The practical importance of implementing eye tracking technology and statistical analysis methods lies in increasing the reliability of assessing the level of operators' visual search competence, as well as in developing a potential system for monitoring operators' state and assessing their visual fatigue.
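
A minimal sketch of how discriminant analysis might be applied to eye movement parameters to separate operators by training level, using scikit-learn. The features, values, and labels are hypothetical; the study's actual parameter set and statistical procedure are not specified here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical per-trial eye movement parameters:
# [mean fixation duration (ms), fixation count, mean saccade amplitude (deg), dwell time on target (ms)]
X = np.array([
    [220, 14, 4.1, 850],   # experienced operator
    [205, 12, 3.8, 900],
    [310, 25, 6.0, 400],   # novice operator
    [295, 27, 5.7, 350],
    [230, 15, 4.3, 820],
    [305, 24, 5.9, 380],
])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = experienced, 0 = novice

lda = LinearDiscriminantAnalysis()
print("Cross-validated accuracy:", cross_val_score(lda, X, y, cv=3).mean())

lda.fit(X, y)
print("Discriminant function coefficients:", lda.coef_)
```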



2005 ◽  
Vol 15 (3) ◽  
pp. 149-160
Author(s):  
Jelte E. Bos ◽  
Jan van Erp ◽  
Eric L. Groen ◽  
Hendrik-Jan van Veen

This paper shows that tactile stimulation can override vestibular information regarding spinning sensations and eye movements. However, we conclude that the current data do not support the hypothesis that tactile stimulation controls eye movements directly. To this end, twenty-four subjects were passively disoriented by an abrupt stop after an increase in yaw velocity, about an Earth-vertical axis, up to 120°/s. Immediately thereafter, they had to actively maintain a stationary position despite a disturbance signal. Subjects wore a tactile display vest with 48 miniature vibrators, applied in different combinations with visual and vestibular stimuli. Their performance was quantified by the RMS of body velocity during the self-control phase. Fast eye movement phases were analyzed by counting samples exceeding a velocity limit, and slow phases by a novel method applying a first-order model. Without tactile and visual information, subjects returned to a previous level of angular motion. Tactile stimulation decreased RMS self-velocity considerably, though less than vision did. No differences were observed between conditions in which the vest was active during the recovery phase only or during the disorienting phase as well. All effects of tactile stimulation on the eye movement parameters could be explained by the vestibular stimulus.
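
A toy sketch of two of the quantities mentioned above: RMS body velocity during the self-control phase and a velocity-threshold count of fast eye movement phases. The signals, sampling rate, and threshold are invented for illustration; the paper's first-order model for slow phases is not reproduced.

```python
import numpy as np

def rms(x):
    """Root mean square of a signal."""
    return float(np.sqrt(np.mean(np.square(x))))

# Hypothetical yaw body velocity (deg/s) sampled at 50 Hz during the self-control phase
body_velocity = np.random.default_rng(0).normal(0, 15, size=500)
print("RMS body velocity:", round(rms(body_velocity), 1), "deg/s")

# Hypothetical eye velocity trace (deg/s); fast (saccadic) phases are approximated
# as samples whose absolute velocity exceeds a fixed limit
eye_velocity = np.random.default_rng(1).normal(0, 20, size=500)
FAST_PHASE_THRESHOLD = 60.0  # deg/s, illustrative value
fast_phase_samples = int(np.sum(np.abs(eye_velocity) > FAST_PHASE_THRESHOLD))
print("Samples classified as fast phase:", fast_phase_samples)
```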



2003 ◽  
Vol 89 (6) ◽  
pp. 3340-3343 ◽  
Author(s):  
Neil G. Muggleton ◽  
Chi-Hung Juan ◽  
Alan Cowey ◽  
Vincent Walsh

Recent physiological recording studies in monkeys have suggested that the frontal eye fields (FEFs) are involved in visual scene analysis even when eye movement commands are not required. We examined this proposed function of the human frontal eye fields during performance of visual search tasks in which difficulty was matched and eye movements were not required. Magnetic stimulation over the FEF modulated performance on a conjunction search task and on a simple feature search task in which the target was unpredictable from trial to trial, primarily by increasing false alarm responses. Simple feature search with a predictable target was not affected. The results establish that the human FEFs are critical to visual selection, regardless of the need to generate a saccade command.



PeerJ ◽  
2018 ◽  
Vol 6 ◽  
pp. e6038 ◽  
Author(s):  
Henry Railo ◽  
Henri Olkoniemi ◽  
Enni Eeronheimo ◽  
Oona Pääkkönen ◽  
Juho Joutsa ◽  
...  

Movement in Parkinson’s disease (PD) is fragmented, and the patients depend on visual information in their behavior. This suggests that the patients may have deficits in internally monitoring their own movements. Internal monitoring of movements is assumed to rely on corollary discharge signals that enable the brain to predict the sensory consequences of actions. We studied early-stage PD patients (N = 14), and age-matched healthy control participants (N = 14) to examine whether PD patients reveal deficits in updating their sensory representations after eye movements. The participants performed a double-saccade task where, in order to accurately fixate a second target, the participant must correct for the displacement caused by the first saccade. In line with previous reports, the patients had difficulties in fixating the second target when the eye movement was performed without visual guidance. Furthermore, the patients had difficulties in taking into account the error in the first saccade when making a saccade toward the second target, especially when eye movements were made toward the side with dominant motor symptoms. Across PD patients, the impairments in saccadic eye movements correlated with the integrity of the dopaminergic system as measured with [123I]FP-CIT SPECT: Patients with lower striatal (caudate, anterior putamen, and posterior putamen) dopamine transporter binding made larger errors in saccades. This effect was strongest when patients made memory-guided saccades toward the second target. Our results provide tentative evidence that the motor deficits in PD may be partly due to deficits in internal monitoring of movements.
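
As a hedged illustration of the reported brain-behavior relationship, the sketch below correlates per-patient second-saccade error with striatal dopamine transporter binding. All values are hypothetical placeholders, included purely to show the computation, not data from the study.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-patient values: mean second-saccade endpoint error (deg) in the
# double-saccade task, and striatal DAT binding ratio from [123I]FP-CIT SPECT
saccade_error = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.7, 2.4, 3.1])
dat_binding   = np.array([2.6, 1.9, 2.8, 1.5, 2.1, 1.7, 2.4, 2.0])

r, p = pearsonr(saccade_error, dat_binding)
# A negative r would mean that lower dopamine transporter binding goes with larger errors
print(f"r = {r:.2f}, p = {p:.3f}")
```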



2017 ◽  
Author(s):  
Hoppe David ◽  
Constantin A. Rothkopf

The capability of directing gaze to relevant parts of the environment is crucial for our survival. Computational models based on ideal-observer theory have provided quantitative accounts of human gaze selection in a range of visual search tasks. According to these models, gaze is directed to the position in a visual scene at which uncertainty about task-relevant properties will be reduced maximally with the next look. However, in tasks going beyond a single action, delayed rewards can play a crucial role, thereby necessitating planning. Here we investigate whether humans are capable of planning more than the next single eye movement. We found evidence that our subjects' behavior was better explained by an ideal planner than by the ideal observer. In particular, the location of the first fixation differed depending on the stimulus and the time available for the search. Overall, our results are the first evidence that our visual system is capable of planning.
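
A toy, one-step version of the ideal-observer rule described above: choose the next fixation that maximally reduces expected uncertainty about the target's location, given a visibility function that falls off with eccentricity. The prior, visibility model, and display are invented for illustration and are far simpler than the authors' models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D search display: prior belief over which of 12 locations holds the target
prior = rng.random(12)
prior /= prior.sum()

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def visibility(fixation, locations, sigma=2.0):
    """Probability of resolving each location when fixating `fixation`;
    falls off with eccentricity (a stand-in for a measured visibility map)."""
    return np.exp(-0.5 * ((locations - fixation) / sigma) ** 2)

def expected_entropy_after(fixation, prior):
    """Expected posterior entropy over the target location after one look at
    `fixation`: mass at well-resolved locations is disambiguated, the rest remains."""
    locations = np.arange(len(prior))
    unresolved = prior * (1 - visibility(fixation, locations))
    p_unresolved = unresolved.sum()
    if p_unresolved == 0:
        return 0.0
    return p_unresolved * entropy(unresolved / p_unresolved)

# One-step "ideal observer": fixate where uncertainty is reduced most by the next look
scores = [expected_entropy_after(f, prior) for f in range(len(prior))]
print("Ideal-observer next fixation:", int(np.argmin(scores)))
```

An ideal planner, by contrast, would evaluate sequences of two or more fixations rather than only the next one; that difference is the distinction the study tests.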



2019 ◽  
Author(s):  
Michelle Marie Ramey ◽  
John M. Henderson ◽  
Andrew P. Yonelinas

The memories we form are determined by what we attend to, and conversely, what we attend to is influenced by our memory for past experiences. Although we know that shifts of attention via eye movements are related to memory during encoding and retrieval, the role of specific memory processes in this relationship is unclear. There is evidence that attention may be especially important for some forms of memory (i.e., conscious recollection), and less so for others (i.e., familiarity-based recognition and unconscious influences of memory), but results are conflicting with respect to both the memory processes and eye movement patterns involved. To address this, we used a confidence-based method of isolating eye movement indices of spatial attention that are related to different memory processes (i.e., recollection, familiarity strength, and unconscious memory) during encoding and retrieval of real-world scenes. We also developed a new method of measuring the dispersion of eye movements, which proved to be more sensitive to memory processing than previously used measures. Specifically, in two studies, we found that familiarity strength—that is, changes in subjective reports of memory confidence—increased with i) more dispersed patterns of viewing during encoding, ii) less dispersed viewing during retrieval, and iii) greater overlap in regions viewed between encoding and retrieval (i.e., resampling). Recollection was also related to these eye movements in a similar manner, though the associations with recollection were less consistent across experiments. Furthermore, we found no evidence for effects related to unconscious influences of memory. These findings indicate that attentional processes during viewing may not preferentially relate to recollection, and that the spatial distribution of eye movements is directly related to familiarity-based memory during encoding and retrieval.
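
A minimal sketch, under assumptions about the metrics, of two quantities in the spirit of those described: fixation dispersion (here, mean pairwise distance between fixations) and encoding-retrieval overlap ("resampling", here the proportion of retrieval fixations landing near any encoding fixation). The exact measures used in the paper may differ; the coordinates and radius are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import pdist

def dispersion(fixations):
    """Mean pairwise distance between fixation locations; one of several
    possible dispersion measures."""
    return float(np.mean(pdist(np.asarray(fixations, dtype=float))))

def resampling_overlap(encoding_fix, retrieval_fix, radius=100.0):
    """Proportion of retrieval fixations that land within `radius` pixels
    of at least one encoding fixation."""
    enc = np.asarray(encoding_fix, dtype=float)
    ret = np.asarray(retrieval_fix, dtype=float)
    hits = [np.any(np.linalg.norm(enc - f, axis=1) <= radius) for f in ret]
    return float(np.mean(hits))

encoding = [(100, 120), (420, 300), (640, 80), (300, 400)]
retrieval = [(110, 130), (600, 90), (200, 350)]

print("Encoding dispersion: ", round(dispersion(encoding), 1))
print("Retrieval dispersion:", round(dispersion(retrieval), 1))
print("Resampling overlap:  ", resampling_overlap(encoding, retrieval))
```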



2019 ◽  
Author(s):  
Yunhui Zhou ◽  
Yuguo Yu

Humans perform sequences of eye movements to search for a target in a complex environment, but the efficiency of human search strategy is still controversial. Previous studies showed that humans can optimally integrate information across fixations to determine the next fixation location. However, these models ignored the temporal control of eye movements and the limited capacity of human memory, and their predictions did not agree well with the details of human eye movement metrics. Here, we measured the temporal course of the human visibility map and recorded the eye movements of human subjects performing a visual search task. We then built a continuous-time eye movement model that incorporates saccadic inaccuracy, saccadic bias, and memory constraints in the visual system. This model agreed with many spatial and temporal properties of human eye movements and reproduced several statistical dependencies between successive eye movements. In addition, the model predicted that human saccade decisions are shaped by a memory capacity of around eight recent fixations. These results suggest that the human visual search strategy is not strictly optimal in the sense of fully utilizing the visibility map, but instead balances search performance against the costs of performing the task.

Author summary: During visual search, how humans determine when and where to make eye movements is an important unsolved issue. Previous studies suggested that humans can use the visibility map optimally to determine fixation locations, but we found that such a model did not agree with details of human eye movement metrics because it ignored several realistic biological limitations of the brain and could not explain the temporal control of eye movements. Instead, we show that accounting for the temporal course of visual processing and several constraints of the visual system greatly improves the prediction of the spatiotemporal properties of human eye movements, while only slightly affecting search performance in terms of the median number of fixations. Humans may therefore not use the visibility map in a strictly optimal sense, but instead balance search performance against the costs of performing the task.
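
A toy sketch of the memory-constraint idea: a random search over display locations in which only the most recent eight fixations are remembered and excluded from revisits. It is not the authors' continuous-time model; the display size and search rule are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_LOCATIONS = 25          # candidate fixation locations in the display
MEMORY_CAPACITY = 8       # only the most recent fixations are remembered
target = int(rng.integers(N_LOCATIONS))

visited = []              # first-in, first-out memory of recent fixations
fixation = int(rng.integers(N_LOCATIONS))
extra_fixations = 0

while fixation != target:
    extra_fixations += 1
    visited.append(fixation)
    if len(visited) > MEMORY_CAPACITY:
        visited.pop(0)    # forget the oldest fixation, so it may be revisited
    candidates = [loc for loc in range(N_LOCATIONS) if loc not in visited]
    fixation = int(rng.choice(candidates))

print("Fixations needed to find the target:", extra_fixations + 1)
```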



2021 ◽  
Author(s):  
Roger Johansson ◽  
Marcus Nyström ◽  
Richard Dewhurst ◽  
Mikael Johansson

When we bring to mind something we have seen before, our eyes spontaneously reproduce a pattern strikingly similar to that made during the original encounter. Eye movements can then serve a purpose opposite to acquiring new visual information; they can serve as self-generated cues, pointing to memories already stored. By isolating separable properties within the closely bound chain of where and when we look, we demonstrate that specific components of dynamically reinstated eye-movement sequences facilitate different aspects of episodic remembering. We also show that the fidelity with which a series of connected eye movements from initial encoding is reproduced during subsequent retrieval predicts the quality of the recalled memory. Our findings indicate that eye movements are "replayed" to assemble visuospatial relations as we remember, and that distinct dimensions of these scanpaths contribute differentially depending on the goal-relevant memory.
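
A crude sketch of one way to quantify how faithfully an encoding scanpath is reproduced at retrieval: the mean distance between temporally aligned fixations. The paper's actual sequence-based measures are more sophisticated; the coordinates here are hypothetical.

```python
import numpy as np

def scanpath_fidelity(encoding_path, retrieval_path):
    """Mean distance between temporally aligned fixations of two scanpaths,
    truncated to the shorter sequence; lower values = higher reinstatement
    fidelity. A simple stand-in for sequence-based scanpath measures."""
    enc = np.asarray(encoding_path, dtype=float)
    ret = np.asarray(retrieval_path, dtype=float)
    n = min(len(enc), len(ret))
    return float(np.mean(np.linalg.norm(enc[:n] - ret[:n], axis=1)))

encoding = [(120, 300), (400, 220), (620, 450), (250, 480)]
retrieval_good = [(130, 310), (390, 230), (610, 440)]
retrieval_poor = [(600, 100), (150, 500), (320, 90)]

print("High-fidelity replay:", round(scanpath_fidelity(encoding, retrieval_good), 1))
print("Low-fidelity replay: ", round(scanpath_fidelity(encoding, retrieval_poor), 1))
```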





