Evaluating the human ongoing visual search performance by eye tracking application and sequencing tests

2012 ◽  
Vol 107 (3) ◽  
pp. 468-477 ◽  
Author(s):  
Giacomo Veneri ◽  
Elena Pretegiani ◽  
Francesca Rosini ◽  
Pamela Federighi ◽  
Antonio Federico ◽  
...  


Author(s):  
Kaifeng Liu ◽  
Calvin Ka-lun Or

This eye-tracking study examined the effects of image segmentation and target number on visual search performance. A two-way repeated-measures computer-based visual search test was used for data collection. Thirty students participated in the test, in which they were asked to search for all of the Landolt Cs in 80 arrays of closed rings. The dependent variables were search time, accuracy, fixation count, and average fixation duration. Our principal findings were that some of the segmentation methods significantly improved accuracy and reduced search time, fixation count, and average fixation duration compared with the no-segmentation condition. Increased target number was associated with longer search time, lower accuracy, more fixations, and longer average fixation duration. Our study indicates that although visual search tasks with multiple targets are relatively difficult, visual search accuracy and efficiency can potentially be improved with the aid of image segmentation.
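
The abstract does not describe the statistical pipeline; as a minimal sketch, a two-way repeated-measures design like this one (segmentation condition × target number) is often analysed with a repeated-measures ANOVA. The Python sketch below uses statsmodels' AnovaRM on hypothetical column names (subject, segmentation, targets, search_time) and a hypothetical file name, none of which come from the study itself.

    # Minimal sketch: two-way repeated-measures ANOVA on search time.
    # File and column names are illustrative assumptions, not from the study.
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Long-format data: one row per trial, with the participant ID and the
    # two within-subject factors recorded on every row.
    data = pd.read_csv("search_times_long.csv")  # hypothetical file

    aov = AnovaRM(
        data,
        depvar="search_time",                 # dependent variable
        subject="subject",                    # within-subject identifier
        within=["segmentation", "targets"],   # the two repeated-measures factors
        aggregate_func="mean",                # average multiple trials per cell
    ).fit()

    print(aov.anova_table)  # F, degrees of freedom, and p-value per effect

The same call could be repeated with accuracy, fixation count, or average fixation duration as the dependent variable.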


Author(s):  
Laura E. Matzen ◽  
Mallory C. Stites ◽  
Zoe. N. Gastelum

Abstract Eye tracking is a useful tool for studying human cognition, both in the laboratory and in real-world applications. However, there are cases in which eye tracking is not possible, such as in high-security environments where recording devices cannot be introduced. After facing this challenge in our own work, we sought to test the effectiveness of using artificial foveation as an alternative to eye tracking for studying visual search performance. Two groups of participants completed the same list comparison task, which was a computer-based task designed to mimic an inventory verification process that is commonly performed by international nuclear safeguards inspectors. We manipulated the way in which the items on the inventory list were ordered and color coded. For the eye tracking group, an eye tracker was used to assess the order in which participants viewed the items and the number of fixations per trial in each list condition. For the artificial foveation group, the items were covered with a blurry mask except when participants moused over them. We used mouse movements to track the order in which participants viewed the items and the number of items viewed per trial in each list condition. We observed the same overall pattern of performance for the various list display conditions, regardless of the method. However, participants were much slower to complete the task when using artificial foveation and had more variability in their accuracy. Our results indicate that the artificial foveation method can reveal the same pattern of differences across conditions as eye tracking, but it can also impact participants’ task performance.
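
As an illustration of the artificial-foveation idea described above, the Pillow sketch below blurs a display image and re-exposes a sharp region only around the current mouse position. The study masked individual list items rather than a circular window, and the file name, window radius, and blur strength here are assumptions, not details from the experiment.

    # Sketch of artificial foveation: blur the whole display, then paste the
    # sharp image back in only within a small window around the cursor.
    from PIL import Image, ImageDraw, ImageFilter

    def foveated_view(path, mouse_xy, radius=60, blur=8):
        sharp = Image.open(path).convert("RGB")
        blurred = sharp.filter(ImageFilter.GaussianBlur(blur))

        # Grayscale mask that is opaque (255) only around the mouse position.
        mask = Image.new("L", sharp.size, 0)
        draw = ImageDraw.Draw(mask)
        x, y = mouse_xy
        draw.ellipse((x - radius, y - radius, x + radius, y + radius), fill=255)

        # Composite: sharp pixels inside the mask, blurred pixels elsewhere.
        blurred.paste(sharp, (0, 0), mask)
        return blurred

    # Example: render the display as it would appear with the cursor at (400, 300).
    frame = foveated_view("inventory_list.png", (400, 300))
    frame.save("foveated_frame.png")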


2018 ◽  
Author(s):  
Joram van Driel ◽  
Eduard Ort ◽  
Johannes J. Fahrenfort ◽  
Christian N. L. Olivers

Abstract Many important situations require human observers to simultaneously search for more than one object. Despite a long history of research into visual search, the behavioral and neural mechanisms associated with multiple-target search are poorly understood. Here we test the novel theory that the efficiency of looking for multiple targets critically depends on the mode of cognitive control the environment affords to the observer. We used an innovative combination of EEG and eye tracking while participants searched for two targets, within two different contexts: Either both targets were present in the search display and observers were free to prioritize either one of them, thus enabling proactive control over selection; or only one of the two targets would be present in each search display, which requires reactive control to reconfigure selection when the wrong target is prioritized. During proactive control, both univariate and multivariate signals of beta-band (15–35 Hz) power suppression prior to display onset predicted switches between target selections. This signal originated over midfrontal and sensorimotor regions and has previously been associated with endogenous state changes. In contrast, imposed target selections requiring reactive control elicited prefrontal power enhancements in the delta/theta-band (2–8 Hz), but only after display onset. This signal predicted individual differences in associated oculomotor switch costs, reflecting reactive reconfiguration of target selection. The results provide compelling evidence that multiple target representations are differentially prioritized during visual search, and for the first time reveal distinct neural mechanisms underlying proactive and reactive control over multiple-target search. Significance statement Searching for more than one object in complex visual scenes can be detrimental for search performance. While perhaps annoying in daily life, this can have severe consequences in professional settings such as medical and security screening. Previous research has not yet resolved whether multiple-target search involves changing priorities in what people attend to, and how such changes are controlled. We approached these questions by concurrently measuring cortical activity and eye movements using EEG and eye tracking, while observers searched for multiple possible targets. Our findings provide the first unequivocal support for the existence of two modes of control during multiple-target search, which are expressed in qualitatively distinct time-frequency signatures of the EEG both before and after visual selection.
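
The abstract does not spell out the time-frequency pipeline; as a hedged sketch, beta-band (15–35 Hz) power for a single EEG channel can be estimated with a band-pass filter plus a Hilbert transform, as below. The sampling rate, filter order, baseline window, and the random placeholder signal are illustrative assumptions, not the authors' parameters.

    # Sketch: instantaneous beta-band power from one EEG channel with SciPy.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 512.0                          # assumed sampling rate (Hz)
    eeg = np.random.randn(int(5 * fs))  # placeholder for 5 s of one channel

    # Band-pass filter to the beta band (15-35 Hz).
    b, a = butter(4, [15.0, 35.0], btype="bandpass", fs=fs)
    beta = filtfilt(b, a, eeg)

    # Instantaneous power from the analytic signal.
    power = np.abs(hilbert(beta)) ** 2

    # Express power in dB relative to a pre-display baseline window
    # (here, arbitrarily, the first second of the epoch).
    baseline = power[: int(fs)].mean()
    power_db = 10 * np.log10(power / baseline)

Suppression relative to baseline, as reported before display onset, would show up as negative values of power_db.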


2015 ◽  
Vol 74 (1) ◽  
pp. 55-60 ◽  
Author(s):  
Alexandre Coutté ◽  
Gérard Olivier ◽  
Sylvane Faure

Computer use generally requires manual interaction with human-computer interfaces. In this experiment, we studied the influence of manual response preparation on co-occurring shifts of attention to information on a computer screen. Participants carried out a visual search task on a computer screen while simultaneously preparing to reach for either a proximal or distal switch on a horizontal device, with either their right or left hand. The response properties were not predictive of the target’s spatial position. The results mainly showed that the preparation of a manual response influenced visual search: (1) the visual target whose location was congruent with the goal of the prepared response was found faster; (2) the visual target whose location was congruent with the laterality of the response hand was found faster; (3) these effects had a cumulative influence on visual search performance; (4) the magnitude of the influence of the response goal on visual search was marginally negatively correlated with the speed of response execution. These results are discussed in the general framework of structural coupling between perception and motor planning.
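
As a rough illustration only, the pandas sketch below shows how the two congruency effects and their cumulative influence could be inspected in trial-level data; the file name and column names are hypothetical and not drawn from the published dataset.

    # Sketch: mean search times per congruency condition.
    import pandas as pd

    trials = pd.read_csv("search_trials.csv")  # hypothetical trial-level file

    # Congruency codes: target side vs. reach-goal side, and vs. response-hand side.
    trials["goal_congruent"] = trials["target_side"] == trials["reach_goal_side"]
    trials["hand_congruent"] = trials["target_side"] == trials["response_hand_side"]

    # Mean correct-trial search time for each combination of the two factors;
    # a cumulative effect appears as the doubly congruent cell being fastest.
    summary = (
        trials[trials["correct"]]
        .groupby(["goal_congruent", "hand_congruent"])["search_time_ms"]
        .agg(["mean", "sem"])
    )
    print(summary)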


Author(s):  
Gwendolyn Rehrig ◽  
Reese A. Cullimore ◽  
John M. Henderson ◽  
Fernanda Ferreira

Abstract According to the Gricean Maxim of Quantity, speakers provide the amount of information listeners require to correctly interpret an utterance, and no more (Grice in Logic and conversation, 1975). However, speakers often violate the Maxim of Quantity, especially when the redundant information improves reference precision (Degen et al. in Psychol Rev 127(4):591–621, 2020). Redundant (non-contrastive) information may facilitate real-world search if it narrows the spatial scope under consideration or improves target template specificity. The current study investigated whether non-contrastive modifiers that improve reference precision facilitate visual search in real-world scenes. In two visual search experiments, we compared search performance when perceptually relevant but non-contrastive modifiers were included in the search instruction. Participants (N = 48 in each experiment) searched for a unique target object following a search instruction that contained either no modifier, a location modifier (Experiment 1: on the top left, Experiment 2: on the shelf), or a color modifier (the black lamp). In Experiment 1 only, the target was located faster when the verbal instruction included either modifier, and there was an overall benefit of color modifiers in a combined analysis for scenes and conditions common to both experiments. The results suggest that violations of the Maxim of Quantity can facilitate search when the violations include task-relevant information that either augments the target template or constrains the search space, and when at least one modifier provides a highly reliable cue. Consistent with Degen et al. (2020), we conclude that listeners benefit from non-contrastive information that improves reference precision, and engage in rational reference comprehension. Significance statement This study investigated whether providing more information than someone needs to find an object in a photograph helps them to find that object more easily, even though it means they need to interpret a more complicated sentence. Before searching a scene, participants were given information about where the object would be located in the scene or what color it was, or were only told what object to search for. The results showed that providing additional information helped participants locate an object in an image more easily only when at least one piece of information communicated what part of the scene the object was in, which suggests that more information can be beneficial as long as that information is specific and helps the recipient achieve a goal. We conclude that people will pay attention to redundant information when it supports their task. In practice, our results suggest that instructions in other contexts (e.g., real-world navigation, using a smartphone app, prescription instructions, etc.) can benefit from the inclusion of what appears to be redundant information.
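
A minimal sketch of how the three instruction conditions might be compared, assuming a trial-level file with hypothetical column names (participant, modifier, search_time_ms) and assumed condition labels ("none", "location", "color"): it aggregates search times per participant and condition, then runs paired comparisons of each modifier condition against the no-modifier baseline with SciPy.

    # Sketch: paired comparisons of modifier conditions against no modifier.
    import pandas as pd
    from scipy.stats import ttest_rel

    trials = pd.read_csv("scene_search_trials.csv")  # hypothetical file

    # One mean search time per participant per instruction condition.
    per_subj = (
        trials.groupby(["participant", "modifier"])["search_time_ms"]
        .mean()
        .unstack("modifier")   # columns: "none", "location", "color" (assumed labels)
    )

    for condition in ("location", "color"):
        t, p = ttest_rel(per_subj["none"], per_subj[condition])
        print(f"none vs. {condition}: t = {t:.2f}, p = {p:.3f}")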


Ergonomics ◽  
1992 ◽  
Vol 35 (3) ◽  
pp. 243-252 ◽  
Author(s):  
DOHYUNG KEE ◽  
EUI S. JUNG ◽  
MIN K. CHUNG
