Fixation locations: Recently Published Documents

Total documents: 40 (last five years: 12)
H-index: 13 (last five years: 1)

2022
Author(s): Anke Cajar, Ralf Engbert, Jochen Laubrock

The availability of large eye-movement corpora has become increasingly important over the past years. In scene viewing, scan-path analyses of time-ordered fixations allow, for example, for investigating individual differences in spatial correlations between fixation locations, or for predicting individual viewing behavior in the context of computational models. However, time-dependent analyses require many fixations per scene, and only a few large eye-movement corpora are publicly available. This manuscript presents a new corpus with eye-movement data from two hundred participants. Viewers memorized or searched either color or grayscale scenes while high or low spatial frequencies were filtered in central or peripheral vision. Our database provides the scenes from the experiment with corresponding object annotations, preprocessed eye-movement data, and heatmaps and fixation clusters based on empirical fixation locations. Besides time-dependent analyses, the corpus data allow for investigating questions that have received little attention in scene-viewing research so far: (i) eye-movement behavior under different task instructions, (ii) the importance of color and spatial frequencies when performing these tasks, and (iii) the individual roles and interaction of central and peripheral vision during scene viewing. Furthermore, the corpus allows for validation of computational models of attention and eye-movement control and, finally, for analyses at the object or cluster level.
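
The heatmaps mentioned above can be illustrated with a short sketch: smoothing empirical fixation locations with a Gaussian kernel yields a density map over the scene. This is a minimal illustration, not the corpus' published tooling; the `fixation_heatmap` function, the array layout and the kernel width are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_heatmap(fixations, width, height, sigma_px=25):
    """Smooth empirical fixation locations into a heatmap.

    fixations : array of shape (n, 2) with (x, y) pixel coordinates
    width, height : stimulus size in pixels
    sigma_px : standard deviation of the Gaussian kernel in pixels
    """
    counts = np.zeros((height, width))
    xs = np.clip(fixations[:, 0].astype(int), 0, width - 1)
    ys = np.clip(fixations[:, 1].astype(int), 0, height - 1)
    np.add.at(counts, (ys, xs), 1)            # accumulate fixation counts per pixel
    heatmap = gaussian_filter(counts, sigma=sigma_px)
    return heatmap / heatmap.max()            # normalize to [0, 1]

# Example with simulated fixations on an 800x600 scene
rng = np.random.default_rng(0)
fix = rng.uniform([0, 0], [800, 600], size=(200, 2))
hm = fixation_heatmap(fix, width=800, height=600)
```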


2021
pp. 1-26
Author(s): James E. Cutting

Abstract Popular movies are constructed to control our attention and guide our eye movements across the screen. Estimates of fixation locations were made by manually moving a cursor and clicking over frames at the beginnings and ends of more than 30,000 shots in 24 English-language movies. The results provide evidence for three general filmmaking practices in screen composition. The first and overriding practice is that filmmakers generally place the most important content, usually the center of a character's face, slightly above the center of the screen. The second concerns two-person conversations, which account for about half of popular movie content. Dialogue shots alternate views of the speakers involved, and filmmakers generally place the conversants slightly to opposite sides of the midline. The third concerns all other shots: filmmakers generally follow important content in one shot with similar content in the next shot on the same side of the vertical midline. The horizontal aspect of the first practice seems to follow from the nature of our field of view, and the vertical aspect from the relationship of heads to the bodies depicted. The second practice derives from social norms and an image-composition norm called nose room, and the third from considerations of continuity and the speed of re-engaging attention.
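
As a rough illustration of how such screen-composition measures can be scored (not the author's actual analysis code), the sketch below checks whether a clicked content estimate sits above the screen center, which side of the vertical midline it falls on, and whether successive shots keep content on the same side; the `ShotEstimate` structure and the normalized coordinates are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ShotEstimate:
    x: float  # horizontal click position, 0..1 of screen width
    y: float  # vertical click position, 0..1 of screen height (0 = top)

def above_center(shot: ShotEstimate) -> bool:
    # Practice 1: important content tends to sit slightly above screen center.
    return shot.y < 0.5

def side_of_midline(shot: ShotEstimate) -> str:
    # Practices 2 and 3 concern placement left or right of the vertical midline.
    return "left" if shot.x < 0.5 else "right"

def same_side_as_previous(shots: list) -> list:
    # Practice 3: content in successive shots tends to stay on the same side.
    return [side_of_midline(a) == side_of_midline(b)
            for a, b in zip(shots, shots[1:])]

# Hypothetical estimates for three consecutive shots
shots = [ShotEstimate(0.42, 0.38), ShotEstimate(0.40, 0.41), ShotEstimate(0.61, 0.36)]
print([above_center(s) for s in shots])   # [True, True, True]
print(same_side_as_previous(shots))       # [True, False]
```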


2021
Author(s): Jinbiao Yang, Antal van den Bosch, Stefan L. Frank

Words typically form the basis of psycholinguistic and computational linguistic studies of sentence processing. However, recent evidence shows that the basic units during reading, i.e., the items in the mental lexicon, are not always words but can also be sub-word and supra-word units. To recognize these units, human readers require a cognitive mechanism to learn and detect them. In this paper, we assume that eye fixations during reading reveal the locations of the cognitive units, and that the cognitive units are analogous to the text units discovered by unsupervised segmentation models. We predict eye fixations from model-segmented units on both English and Dutch text. The results show that the model-segmented units predict eye fixations better than word units. This finding suggests that the predictive performance of model-segmented units indicates their plausibility as cognitive units. The Less-is-Better (LiB) model, which finds the units that minimize both long-term and working memory load, offers advantages in both prediction score and efficiency over alternative models. Our results also suggest that modeling the least-effort principle in the management of long-term and working memory can lead to inferring cognitive units. Overall, the study supports the theory that the mental lexicon stores not only words but also smaller and larger units, suggests that fixation locations during reading depend on these units, and shows that unsupervised segmentation models can discover these units.
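
A minimal sketch of how a segmentation can be compared against fixation landing positions (this is not the LiB model or the authors' evaluation): given unit boundaries as character offsets, count how many fixations land in each unit, and summarize the counts separately for word units and model-segmented units. The boundary and offset formats are assumptions.

```python
import bisect

def fixations_per_unit(boundaries, fixation_offsets):
    """Count fixations landing inside each segmented unit.

    boundaries : sorted character offsets where units end (exclusive),
                 e.g. [3, 9, 13] splits a 13-character string into 3 units
    fixation_offsets : character offsets of fixation landing positions
    """
    counts = [0] * len(boundaries)
    for off in fixation_offsets:
        unit = bisect.bisect_right(boundaries, off)   # index of the unit containing off
        if unit < len(counts):
            counts[unit] += 1
    return counts

# Toy example for the 15-character string "the big red dog"
word_bounds = [3, 7, 11, 15]   # "the", " big", " red", " dog"
unit_bounds = [7, 15]          # "the big", " red dog" (hypothetical supra-word units)
fix = [1, 5, 12]               # hypothetical fixation landing offsets

print(fixations_per_unit(word_bounds, fix))  # [1, 1, 0, 1]
print(fixations_per_unit(unit_bounds, fix))  # [2, 1]
```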


Leonardo
2021
pp. 1-10
Author(s): Eugene Han

Abstract In this study, the author developed a method for representing data from eye-tracking recordings. The study proposed a form of graphical analysis that illustrates hierarchical densities of visual regard without obscuring the original pictorial stimulus. Across three case studies, subjects' fixation patterns were used to propagate Voronoi generating points. Integrating both fixation locations and their respective dwell times, a randomized Gaussian distribution provided a technique to augment the Voronoi generating seeds and enhance graphical resolution. Color pixel values were then used to fill the resulting Voronoi cells, in relation to the color values of the original stimulus. The study revealed a form of analysis that allowed for effective differentiation of viewing behaviors between subjects, in which emphasis was placed on a subject's attentional distribution rather than on graphic icons.
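
A minimal sketch of the cell-coloring idea described above, under assumed data formats (it is not the author's implementation): dwell-time-weighted seeds are jittered with Gaussian noise, every pixel is assigned to its nearest seed, which implicitly defines the Voronoi cells (a nearest-neighbour fill rather than an explicit scipy.spatial.Voronoi tessellation), and each cell is filled with the stimulus color at its generating seed.

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_fixation_image(stimulus, fixations, dwell_times, jitter_sd=15, seed=0):
    """Render a Voronoi mosaic of a stimulus driven by fixation data.

    stimulus    : (H, W, 3) RGB image array
    fixations   : (n, 2) array of (x, y) fixation locations in pixels
    dwell_times : (n,) dwell time per fixation; longer dwell -> more seeds
    jitter_sd   : std. dev. of the Gaussian jitter applied to the seeds (pixels)
    """
    rng = np.random.default_rng(seed)
    h, w, _ = stimulus.shape
    # Replicate each fixation proportionally to its dwell time, then jitter.
    reps = np.maximum(1, np.round(dwell_times / dwell_times.min()).astype(int))
    seeds = np.repeat(fixations, reps, axis=0).astype(float)
    seeds += rng.normal(0, jitter_sd, seeds.shape)
    seeds[:, 0] = np.clip(seeds[:, 0], 0, w - 1)
    seeds[:, 1] = np.clip(seeds[:, 1], 0, h - 1)
    # Assign every pixel to its nearest seed (implicit Voronoi cells) ...
    ys, xs = np.mgrid[0:h, 0:w]
    pixels = np.column_stack([xs.ravel(), ys.ravel()])
    _, nearest = cKDTree(seeds).query(pixels)
    # ... and fill each cell with the stimulus color at its generating seed.
    seed_colors = stimulus[seeds[:, 1].astype(int), seeds[:, 0].astype(int)]
    return seed_colors[nearest].reshape(h, w, 3)

# Toy usage with a random "stimulus" and three fixations
img = np.random.default_rng(1).integers(0, 255, (120, 160, 3), dtype=np.uint8)
fix = np.array([[40.0, 30.0], [100.0, 60.0], [130.0, 90.0]])
dwell = np.array([200.0, 450.0, 300.0])
mosaic = voronoi_fixation_image(img, fix, dwell)
```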


Author(s): Valeria C. Caruso, Daniel S. Pages, Marc A. Sommer, Jennifer M. Groh

Stimulus locations are detected differently by different sensory systems, but ultimately they yield similar percepts and behavioral responses. How the brain transcends initial differences to compute similar codes is unclear. We quantitatively compared the reference frames of two sensory modalities, vision and audition, across three interconnected brain areas involved in generating saccades: the frontal eye fields (FEF), the lateral and medial parietal cortex (M/LIP), and the superior colliculus (SC). We recorded from single neurons in head-restrained monkeys performing auditory- and visually-guided saccades from variable initial fixation locations, and evaluated whether their receptive fields were better described as eye-centered, head-centered, or hybrid (i.e., not anchored uniquely to head or eye orientation). We found a progression of reference frames across areas and across time, with considerable hybridness and persistent differences between modalities in most epochs and brain regions. For both modalities, the SC was more eye-centered than the FEF, which in turn was more eye-centered than the predominantly hybrid M/LIP. In all three areas and in all temporal epochs from stimulus onset to movement, visual signals were more eye-centered than auditory signals. In the SC and FEF, auditory signals became more eye-centered at the time of the saccade than they were initially after stimulus onset, but only in the SC did the auditory signals become predominantly eye-centered at the time of the saccade. The results indicate that visual and auditory signals both undergo transformations, ultimately reaching the same final reference frame but via different dynamics across brain regions and time.
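
One illustrative way to ask whether spatial tuning is better aligned in an eye-centered or a head-centered frame (a sketch, not the authors' analysis) is to measure tuning curves at two initial fixation locations and compare how well they line up in each frame; the data layout and the simulated Gaussian receptive field below are assumptions.

```python
import numpy as np

def frame_alignment(targets_head, responses_by_fixation, fixations):
    """Compare eye- vs head-centered alignment of spatial tuning.

    targets_head          : (k,) equally spaced target locations, head-centered (deg)
    responses_by_fixation : dict {fixation: (k,) mean responses to each target}
    fixations             : two initial fixation locations (deg)
    """
    f1, f2 = fixations
    r1, r2 = responses_by_fixation[f1], responses_by_fixation[f2]
    # Head-centered frame: the same physical target should drive the same response.
    head_corr = np.corrcoef(r1, r2)[0, 1]
    # Eye-centered frame: shift one curve by the fixation offset so that equal
    # target-minus-eye positions are compared against each other.
    shift = int(round((f2 - f1) / (targets_head[1] - targets_head[0])))
    if shift > 0:
        eye_corr = np.corrcoef(r1[:-shift], r2[shift:])[0, 1]
    elif shift < 0:
        eye_corr = np.corrcoef(r1[-shift:], r2[:shift])[0, 1]
    else:
        eye_corr = head_corr
    return {"head_centered": head_corr, "eye_centered": eye_corr}

def gaussian_tuning(eye_centered_deg):
    # Simulated eye-centered receptive field (Gaussian, sigma = 8 deg)
    return np.exp(-0.5 * (eye_centered_deg / 8.0) ** 2)

targets = np.arange(-20.0, 25.0, 5.0)     # head-centered target positions (deg)
responses = {-10.0: gaussian_tuning(targets - (-10.0)),
             10.0: gaussian_tuning(targets - 10.0)}
print(frame_alignment(targets, responses, [-10.0, 10.0]))  # eye-centered correlation is higher
```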


2021
Vol 11 (1)
Author(s): Yuyan Liu, Ying Wang, Yi Dong, Dongqing Liang, Shiyong Xie, ...

Abstract To analyze the relationships between fixation location and visual function in idiopathic macular hole (IMH) patients using macular integrity assessment (MAIA) examination preoperatively and 3 months postoperatively. This was a retrospective case analysis. Forty-three eyes of 43 patients diagnosed with IMH were included in this study. Best corrected visual acuity (BCVA) assessments, optical coherence tomography (OCT) and MAIA examinations were performed before surgery and 1 week, 1 month and 3 months after surgery. The relationships between MAIA parameters and visual acuity were assessed by correlation analysis. Grouping by fixation location with the foveola (2°) as the centre, the locations were divided into five groups: foveolar, temporal, nasal, inferior and superior fixation. The mean macular sensitivity (MMS) of the macular area was correlated with BCVA in the IMH patients before and 3 months after surgery (before surgery P = 0.00, after surgery P = 0.00). The MMS can serve as a good indicator for evaluating visual function in IMH patients. There was a significant difference in fixation location before and after the operation (P = 0.01). The preoperative fixation location of IMH patients was mainly in the superior area, whereas postoperatively it moved to the foveolar and nasal areas. Paying attention to changes in fixation location in IMH patients may provide new clues for further improving postoperative visual function.
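
As a hedged illustration of the grouping and correlation steps described above (not the study's actual analysis code), the sketch below assigns a fixation location to one of the five sectors relative to the foveola and correlates hypothetical MMS values with BCVA; the coordinate convention and all numbers are assumptions.

```python
import math
from scipy.stats import pearsonr

def classify_fixation(x_deg, y_deg, eye="OD"):
    """Assign a fixation location (degrees from the foveola) to one of five sectors.

    Assumed convention: +x is temporal for a right eye (OD) and nasal for a left
    eye (OS); +y is superior. Within 2 degrees of the foveola -> 'foveolar'.
    """
    if math.hypot(x_deg, y_deg) <= 2.0:
        return "foveolar"
    if abs(x_deg) >= abs(y_deg):
        temporal_is_positive_x = (eye == "OD")
        return "temporal" if (x_deg > 0) == temporal_is_positive_x else "nasal"
    return "superior" if y_deg > 0 else "inferior"

# Hypothetical paired measurements: mean macular sensitivity (dB) and BCVA (logMAR)
mms  = [22.1, 18.4, 25.0, 20.3, 16.9, 23.7]
bcva = [0.30, 0.70, 0.10, 0.52, 0.80, 0.22]
r, p = pearsonr(mms, bcva)   # better sensitivity should track better (lower) logMAR
print(classify_fixation(1.2, -0.5), classify_fixation(3.0, 1.0), round(r, 2), round(p, 3))
```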


2020
Vol 13 (2)
Author(s): Nino Sharvashidze, Alexander C. Schütz

In art schools and art history classes, students are trained to pay attention to different aspects of an artwork, such as art-movement characteristics and painting techniques. Experts are better at processing the style and visual features of an artwork than nonprofessionals. Here we tested the hypothesis that experts in art use different, task-dependent viewing strategies than nonprofessionals when analyzing a piece of art. We compared a group of art history students with a group of students with no art education background while they viewed 36 paintings under three discrimination tasks. Participants were asked to determine the art movement, the date and the medium of the paintings. We analyzed behavioral and eye-movement data of 27 participants. Our observers adjusted their viewing strategies according to the task, resulting in longer fixation durations and shorter saccade amplitudes for the medium-detection task. We found higher task accuracy and subjective confidence, less congruence and higher dispersion in fixation locations in experts. Expertise also influenced saccade metrics, biasing them towards larger saccade amplitudes, which suggests a more holistic scanning strategy of experts in all three tasks.
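
The eye-movement measures compared here (fixation duration, saccade amplitude, fixation dispersion) can be computed from a fixation sequence as in the following sketch; the data layout and values are hypothetical, not the study's analysis pipeline.

```python
import numpy as np

def viewing_measures(fixations):
    """Summary eye-movement measures from one trial.

    fixations : (n, 3) array with columns x (deg), y (deg), duration (ms)
    """
    xy, dur = fixations[:, :2], fixations[:, 2]
    amplitudes = np.linalg.norm(np.diff(xy, axis=0), axis=1)            # saccade amplitudes (deg)
    dispersion = np.mean(np.linalg.norm(xy - xy.mean(axis=0), axis=1))  # mean distance from centroid
    return {
        "mean_fixation_duration_ms": float(dur.mean()),
        "mean_saccade_amplitude_deg": float(amplitudes.mean()),
        "fixation_dispersion_deg": float(dispersion),
    }

# Hypothetical trial: five fixations on a painting
trial = np.array([[0.0, 0.0, 220.0],
                  [2.5, 1.0, 310.0],
                  [6.0, 0.5, 280.0],
                  [5.5, 4.0, 260.0],
                  [1.0, 3.5, 300.0]])
print(viewing_measures(trial))
```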


Author(s): Jork Stapel, Mounir El Hassnaoui, Riender Happee

Objective: To investigate how well gaze behavior can indicate driver awareness of individual road users when related to the vehicle's road-scene perception. Background: An appropriate method is required to identify how driver gaze reveals awareness of other road users. Method: We developed a recognition-based method for labeling driver situation awareness (SA) in a vehicle with road-scene perception and eye tracking. Thirteen drivers performed 91 left turns at complex urban intersections and identified images of encountered road users among distractor images. Results: Drivers fixated within 2° of 72.8% of relevant and 27.8% of irrelevant road users, and were able to recognize 36.1% of the relevant and 19.4% of the irrelevant road users one minute after leaving the intersection. Gaze behavior could predict road-user relevance but not the outcome of the recognition task. Unexpectedly, 18% of road users observed beyond 10° were recognized. Conclusions: Despite suboptimal psychometric properties leading to low recognition rates, our recognition task could identify awareness of individual road users during left-turn maneuvers. Perception occurred at gaze angles well beyond 2°, which means that fixation locations are insufficient for awareness monitoring. Application: Findings can be used in driver attention and awareness modelling and in the design of gaze-based driver support systems.
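
A minimal sketch of the within-2° criterion (under assumed data formats, not the authors' pipeline): gaze and road-user directions given as yaw/pitch angles are converted to unit vectors, and the minimum angular separation over time decides whether the road user counts as fixated.

```python
import numpy as np

def min_angular_separation(gaze_yaw_pitch, object_yaw_pitch):
    """Minimum angle (deg) between gaze direction and an object direction over time.

    Both inputs are (t, 2) arrays of yaw and pitch in degrees, sampled at the
    same timestamps. Directions are converted to unit vectors so the angular
    separation is correct even at large eccentricities.
    """
    def to_unit(yaw_pitch):
        yaw, pitch = np.radians(yaw_pitch[:, 0]), np.radians(yaw_pitch[:, 1])
        return np.column_stack([np.cos(pitch) * np.sin(yaw),
                                np.sin(pitch),
                                np.cos(pitch) * np.cos(yaw)])
    g, o = to_unit(gaze_yaw_pitch), to_unit(object_yaw_pitch)
    cos_sep = np.clip(np.sum(g * o, axis=1), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_sep)).min())

# Hypothetical 4-sample gaze trace and a cyclist moving across the scene
gaze = np.array([[ 5.0, -2.0], [12.0, -1.0], [20.0, 0.0], [25.0, 1.0]])
cyclist = np.array([[30.0, 0.0], [26.0, 0.0], [21.5, 0.0], [18.0, 0.0]])
sep = min_angular_separation(gaze, cyclist)
print(sep, "fixated" if sep <= 2.0 else "not fixated")   # within-2-degree criterion
```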


2020
Author(s): Šimon Kucharský, Daan Roelof van Renswoude, Maartje Eusebia Josefa Raijmakers, Ingmar Visser

Describing, analyzing and explaining patterns in eye-movement behavior is crucial for understanding visual perception. Furthermore, eye movements are increasingly used to inform cognitive process models. In this article, we start by reviewing basic characteristics of and desiderata for models of eye movements. Specifically, we argue that there is a need for models combining spatial and temporal aspects of eye-tracking data (i.e., fixation durations and fixation locations), that formal models derived from concrete theoretical assumptions are needed to inform our empirical research, and that custom statistical models are useful for detecting specific empirical phenomena that are to be explained by said theory. We then develop a conceptual model of eye movements, specifically of fixation durations and fixation locations, and from it derive a formal statistical model, meeting our goal of crafting a model useful in both the theoretical and the empirical research cycle. We demonstrate the use of the model on an example of infant natural scene viewing, to show that the model can explain different features of the eye-movement data and to show how to identify when the model needs to be adapted because it does not agree with the data. We conclude with a discussion of potential future avenues for formal eye-movement models.
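
To make the idea of a joint spatiotemporal model concrete, the sketch below simulates a scanpath with Gamma-distributed fixation durations and fixation locations drawn from a saliency map tempered by a preference for short saccades. This is only an illustrative toy model, not the model developed in the article; all parameter values are assumptions.

```python
import numpy as np

def simulate_scanpath(saliency, n_fixations=10, mean_dur=0.3, shape=4.0,
                      saccade_sd=80.0, seed=0):
    """Simulate fixation locations and durations on a saliency map.

    saliency   : (H, W) non-negative array; higher values attract fixations
    mean_dur   : mean fixation duration in seconds (Gamma distributed)
    saccade_sd : std. dev. (pixels) of the Gaussian preference for short saccades
    """
    rng = np.random.default_rng(seed)
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.array([w / 2.0, h / 2.0])        # start at the image center
    scanpath = []
    for _ in range(n_fixations):
        dist2 = (xs - pos[0]) ** 2 + (ys - pos[1]) ** 2
        weights = saliency * np.exp(-dist2 / (2 * saccade_sd ** 2))
        p = (weights / weights.sum()).ravel()
        idx = rng.choice(h * w, p=p)          # sample the next fixation location
        pos = np.array([idx % w, idx // w], dtype=float)
        duration = rng.gamma(shape, mean_dur / shape)   # Gamma with the desired mean
        scanpath.append((pos[0], pos[1], duration))
    return scanpath

# Toy saliency map with two hotspots
yy, xx = np.mgrid[0:120, 0:160]
sal = (np.exp(-((xx - 40) ** 2 + (yy - 60) ** 2) / 800.0)
       + np.exp(-((xx - 120) ** 2 + (yy - 40) ** 2) / 800.0))
for x, y, d in simulate_scanpath(sal, n_fixations=5):
    print(f"x={x:5.1f}  y={y:5.1f}  duration={d:.3f}s")
```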


2020
Vol 37 (3)
pp. 259-269
Author(s): Kristian Pentus, Kerli Ploom, Tanel Mehine, Madli Koiv, Age Tempel, ...

Purpose: This paper aims to test whether on-screen eye tracking and mobile eye tracking yield similar results for the first fixation location on a stimulus. Design/methodology/approach: Three studies with 117 participants in total were conducted, in which the authors compared the two methods: stationary eye tracking (Tobii Pro X2-60) and mobile eye tracking (Tobii Pro Glasses 2). Findings: The studies revealed that the reported average first fixation locations from stationary and mobile eye tracking differ. Stationary eye tracking is more affected by a centre fixation bias. Based on the research, it can be concluded that stationary eye tracking is not always suitable for studying consumer perception and behaviour because of the centre viewing bias. Research limitations/implications: When interpreting the results, researchers should take into account that stationary eye-tracking results are affected by a centre fixation bias. Previous stationary eye-tracking research should be interpreted with the centre fixation bias in mind, and some of this previous work should be retested using mobile eye tracking. Where possible, small-scale pilot studies should be included in papers to show that the more appropriate method, less affected by attention biases, was chosen. Practical implications: Managers should trust research in which the ability of package design to attract attention on a shelf is tested using mobile eye tracking. The authors suggest using mobile eye tracking to optimise store shelf planograms, point-of-purchase materials and shelf layouts. In package design, interpretations of research using stationary eye tracking should consider its centre fixation bias. Managers should also be cautious when interpreting previous stationary eye-tracking research (both applied and scientific), knowing that stationary eye tracking is more prone to a centre fixation bias. Originality/value: While eye tracking has become an increasingly popular marketing research method, its limitations have not been fully understood by the field. This paper shows that the chosen eye-tracking method can influence the results; no such comparison of mobile and stationary eye tracking has previously been made in the marketing field.
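
The centre fixation bias reported here can be illustrated with simulated data: compute each first fixation's distance from the stimulus centre and compare the two setups. This sketch is not the paper's analysis; the data, the sample sizes and the choice of Welch's t-test are assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind

def center_offset(first_fixations, stimulus_center):
    """Distance of each first fixation from the stimulus centre (same units as input)."""
    return np.linalg.norm(np.asarray(first_fixations) - np.asarray(stimulus_center), axis=1)

# Simulated first fixations on an 800x600 stimulus: the stationary (screen-based)
# group clusters near the centre, the mobile group is more spread out.
rng = np.random.default_rng(0)
center = (400.0, 300.0)
stationary = rng.normal(center, 40.0, size=(60, 2))
mobile = rng.normal(center, 110.0, size=(57, 2))

d_stat = center_offset(stationary, center)
d_mob = center_offset(mobile, center)
t, p = ttest_ind(d_stat, d_mob, equal_var=False)   # Welch's t-test on centre offsets
print(f"mean offset stationary={d_stat.mean():.1f}px  mobile={d_mob.mean():.1f}px  p={p:.4f}")
```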

