Virtual Prospecting in Paleontology Using a Drone-Based Orthomosaic Map: An Eye Movement Analysis

2021 ◽  
Vol 10 (11) ◽  
pp. 753
Author(s):  
Tanya Beelders ◽  
Gavin Dollman

Paleontological fieldwork is often a time-consuming and resource-intensive process. In unexplored and remote areas, the satellite images, geology, and topography of an area are analyzed to help survey for a site. A drone-based orthomosaic map is suggested as an additional tool for virtual fossil prospecting in paleontology. The use of an orthomosaic map was compared to the use of a typical satellite map when looking for fossil sites to prospect. Factors were chosen for their impact when prospecting for a fossil site and for the availability of data. Eye movement data were captured for a convenience sample of paleontologists from a local university. Each band within the satellite map measures 7741 × 7821 pixels with a ground resolution of 30 m/pix, while the orthomosaic map measures 52,634 × 32,383 pixels with a ground resolution of 2.86 cm/pix. Experts displayed gaze behavior suggestive of high levels of analysis as well as the ability to identify and analyze features rapidly, as illustrated by the presence of both longer and shorter fixations. Experts also appeared to examine both maps in more detail than novices. The orthomosaic map was very successful at both attracting and keeping the attention of the map reader on certain features. It was concluded that a drone-based orthomosaic map used in conjunction with a satellite map is a useful tool for high-spatial-density virtual prospecting for both novices and experts.
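As a rough reader-side check of the spatial scales involved (not a calculation from the study itself), the reported pixel dimensions and ground resolutions translate into approximate ground coverage as follows:

# Approximate ground extent implied by the reported map sizes and resolutions.
sat_px = (7741, 7821)          # satellite band size in pixels
sat_res_m = 30.0               # ground resolution in metres per pixel
ortho_px = (52634, 32383)      # orthomosaic size in pixels
ortho_res_m = 0.0286           # 2.86 cm per pixel, expressed in metres

sat_extent_km = tuple(round(p * sat_res_m / 1000, 1) for p in sat_px)
ortho_extent_km = tuple(round(p * ortho_res_m / 1000, 2) for p in ortho_px)
print("Satellite band:", sat_extent_km, "km")   # roughly (232.2, 234.6) km
print("Orthomosaic:", ortho_extent_km, "km")    # roughly (1.51, 0.93) km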

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5178
Author(s):  
Sangbong Yoo ◽  
Seongmin Jeong ◽  
Seokyeon Kim ◽  
Yun Jang

Gaze movement and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. During these analyses, eye movement data and the saliency map are presented to analysts as separate views or as merged views. However, analysts become frustrated when they need to memorize all of the separate views or when the eye movements obscure the saliency map in the merged views. It is therefore not easy to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express the visual attention of an observer. The visual clues that represent visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data together with the saliency features to interpret the visual attention. We analyze gaze behavior with the proposed visualization to evaluate whether embedding saliency features within the visualization helps us understand the visual attention of an observer.
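As a minimal sketch of the kind of pairing described above (not the authors' actual pipeline), the following computes a simple spectral-residual saliency map for a stimulus image and reads off the saliency value under each gaze sample; the stimulus array and gaze coordinates are hypothetical stand-ins:

import numpy as np
from scipy import ndimage

def spectral_residual_saliency(gray):
    # Spectral-residual saliency (Hou & Zhang, 2007) for a 2-D grayscale image.
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - ndimage.uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = ndimage.gaussian_filter(sal, sigma=8)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

# Hypothetical grayscale stimulus and gaze samples in (x, y) pixel coordinates.
stimulus = np.random.rand(600, 800)
gaze_xy = np.array([[120, 340], [410, 95]])

saliency = spectral_residual_saliency(stimulus)
print(saliency[gaze_xy[:, 1], gaze_xy[:, 0]])   # saliency under each gaze point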


Author(s):  
Gavindya Jayawardena ◽  
Sampath Jayarathna

Eye-tracking experiments involve areas of interest (AOIs) for the analysis of eye gaze data. While there are tools for delineating AOIs to extract eye movement data, they may require users to manually draw AOI boundaries on eye-tracking stimuli or to use markers to define AOIs. This paper introduces two novel techniques for dynamically filtering eye movement data from AOIs for the analysis of eye metrics at multiple levels of granularity. The authors incorporate pre-trained object detectors and object instance segmentation models for offline detection of dynamic AOIs in video streams. This research presents the implementation and evaluation of object detectors and object instance segmentation models to find the best model to be integrated into a real-time eye movement analysis pipeline. The authors filter gaze data that falls within the polygonal boundaries of detected dynamic AOIs and apply an object detector to find bounding boxes in a public dataset. The results indicate that the dynamic AOIs generated by object detectors capture 60% of eye movements, while those generated by object instance segmentation models capture 30% of eye movements.
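A minimal sketch of the gaze-filtering step (the AOI polygon and gaze samples are hypothetical placeholders; the detector or segmentation model that would produce the polygon for each video frame is omitted):

from shapely.geometry import Point, Polygon

# Hypothetical dynamic AOI for one video frame, e.g. the polygonal boundary
# returned by an object instance segmentation model, in pixel coordinates.
aoi = Polygon([(100, 100), (300, 100), (300, 250), (100, 250)])

# Hypothetical gaze samples for the same frame: (timestamp, x, y).
gaze = [(0.016, 150, 180), (0.033, 420, 90), (0.050, 260, 210)]

# Keep only the samples that fall inside the detected AOI.
inside = [s for s in gaze if aoi.contains(Point(s[1], s[2]))]
print(inside)   # the first and third samples fall within the AOI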


Author(s):  
Jennifer Smith ◽  
Geoff Long ◽  
Peter Dawes ◽  
Oliver Runswick ◽  
Michael Tipton

Surveillance is key to the lifesaving capability of lifeguards. Experienced personnel consistently display enhanced hazard detection capabilities compared to their less experienced counterparts. However, the mechanisms that underpin this effect and the time it takes to develop these skills are not understood. We hypothesized that, after one season of experience, the number of hazards detected by, and the eye movements of, less experienced lifeguards (LEL) would more closely approximate those of experienced lifeguards (EL). The LEL watched 'beach scene' videos at the beginning and end of their first season. The number of hazards detected and eye-movement data were collected and compared to the EL group. The LEL perceived fewer hazards than the EL, and the number detected did not increase over the season. There was no difference in eye movements between the groups. The findings suggest that one season is not enough for lifeguards to develop enhanced hazard detection skills and that skill-level differences are not underpinned by differences in gaze behavior.


Author(s):  
Liang Sun ◽  
Hua Shao ◽  
Shuyang Li ◽  
Xiaoxun Huang ◽  
Wenyan Yang

Beauty estimation is a common method for landscape quality assessment, although it has some limitations. With an eye tracker, the visual behavior of subjects during the estimation can be recorded. Through analyses of heat maps, path maps, and eye movement data, the psychological changes of the subjects and the underlying laws of landscape aesthetics can be understood, which supplements beauty estimation. This paper studied the beauty estimation of urban waterfront parks and demonstrated that a landscape quality estimation method centered on beauty estimation and assisted by eye movement tracking is feasible. It can improve the objectivity and accuracy of landscape quality estimation to some extent and provide a more comprehensive understanding of the effects and combinations of landscape characteristic elements.
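For illustration only, a fixation heat map of the kind mentioned above is typically built from raw fixation coordinates roughly as follows (the coordinates and stimulus size here are made up):

import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical fixation coordinates (x, y) in pixels on an 800 x 600 stimulus.
fix_x = np.array([120, 130, 410, 405, 640])
fix_y = np.array([200, 210, 330, 340, 100])

# Accumulate fixations on a pixel grid, then blur to obtain a smooth heat map.
heat, _, _ = np.histogram2d(fix_y, fix_x, bins=(600, 800), range=[[0, 600], [0, 800]])
heat = gaussian_filter(heat, sigma=25)
heat /= heat.max()     # normalize to [0, 1] for display as an overlay
print(heat.shape)      # (600, 800)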


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Xiaoming Wang ◽  
Xinbo Zhao ◽  
Jinchang Ren

Traditional eye movement models are based on psychological assumptions and empirical data and are not able to simulate eye movements on previously unseen text data. To address this problem, a new type of eye movement model is presented and tested in this paper. In contrast to conventional psychology-based eye movement models, ours is based on a recurrent neural network (RNN) that generates a gaze point prediction sequence by combining convolutional neural networks (CNNs), bidirectional long short-term memory networks (BiLSTM), and conditional random fields (CRFs). The model uses the eye movement data of a reader reading some texts as training data to predict the eye movements of the same reader reading a previously unseen text. A theoretical analysis of the model is presented to show its excellent convergence performance. Experimental results then demonstrate that the proposed model can achieve similar prediction accuracy while requiring fewer features than current machine learning models.
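A minimal sketch of the general CNN + BiLSTM sequence-labelling pattern the abstract refers to (not the authors' exact architecture; feature dimensions are made up, and the CRF layer that would normally decode the emitted scores is omitted):

import torch
import torch.nn as nn

class GazeSequenceModel(nn.Module):
    # 1-D CNN over per-word features, BiLSTM over the sentence, and per-word
    # emission scores; a linear-chain CRF would sit on top for decoding.
    def __init__(self, feat_dim=16, conv_dim=32, hidden=64, n_labels=2):
        super().__init__()
        self.conv = nn.Conv1d(feat_dim, conv_dim, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv_dim, hidden, batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * hidden, n_labels)

    def forward(self, x):                  # x: (batch, seq_len, feat_dim)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.lstm(h)
        return self.emit(h)                # (batch, seq_len, n_labels)

# Hypothetical batch: 4 sentences of 20 words with 16 text features per word.
scores = GazeSequenceModel()(torch.randn(4, 20, 16))
print(scores.shape)                        # torch.Size([4, 20, 2])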


Author(s):  
Saptarshi Mandal ◽  
Ziho Kang ◽  
Angel Millan

Visualization approaches for eye movement analysis suffer from two limitations: (1) an inability to handle the stochasticity of both the number and the position of moving areas of interest (AOIs), and (2) the absence of quantitative metrics for analyzing eye movement data. We adapted the directed weighted network (DWN) and its associated "centrality" metrics to support the visualization of complex eye movement data. A case study was performed using a realistic air traffic control task environment. Promising results were found, as we were able to identify the important targets (aircraft) interrogated by an air traffic controller in different time frames. This case study serves as a foundation for developing effective data visualization methods and quantitative metrics for analyzing complex eye movements in a multi-element tracking task.
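A minimal sketch of the directed-weighted-network idea (the AOI labels and scanpath are hypothetical): each AOI becomes a node, each gaze transition between AOIs increments a directed edge weight, and a centrality score then ranks the AOIs:

import networkx as nx

# Hypothetical sequence of AOIs (aircraft) visited by successive fixations.
scanpath = ["AC1", "AC2", "AC1", "AC3", "AC2", "AC1"]

G = nx.DiGraph()
for src, dst in zip(scanpath, scanpath[1:]):
    if G.has_edge(src, dst):
        G[src][dst]["weight"] += 1       # repeated transition: heavier edge
    else:
        G.add_edge(src, dst, weight=1)

# Weighted in-degree as a simple centrality: how strongly attention flows into each AOI.
centrality = dict(G.in_degree(weight="weight"))
print(sorted(centrality.items(), key=lambda kv: -kv[1]))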


2019 ◽  
Author(s):  
Martin Schoemann ◽  
Michael Schulte-Mecklenbeck ◽  
Frank Renkewitz ◽  
Stefan Scherbaum

The study of cognitive processes is built on a close mapping between three components: overt gaze behavior, overt choice, and covert processes. To validate this overt-covert mapping in the domain of decision making, we collected eye-movement data during decisions between risky gamble problems. Applying a forward inference paradigm, participants were instructed to use specific decision strategies to solve those gamble problems (maximizing expected values or applying different choice heuristics) while their gaze behavior was recorded. We revealed differences between overt behavior, as indicated by eye movements, and the covert decision processes instructed by the experimenter. Our results thus indicate that, for some eye-movement measures, the overt-covert mapping is not as close as current decision theory expects; they hence call reverse inference into question as being prone to fallacies due to a violation of its prerequisite, that is, a close overt-covert mapping. We propose a framework to rehabilitate reverse inference.
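For context, the expected-value strategy mentioned above reduces to a simple calculation; a toy sketch with made-up gambles, each represented as a list of (outcome, probability) pairs:

# Two hypothetical gambles; the probabilities in each gamble sum to 1.
gamble_a = [(100, 0.2), (0, 0.8)]
gamble_b = [(30, 0.8), (0, 0.2)]

def expected_value(gamble):
    return sum(outcome * p for outcome, p in gamble)

# The expected-value strategy simply picks the gamble with the higher EV.
best = max([("A", gamble_a), ("B", gamble_b)], key=lambda g: expected_value(g[1]))
print(expected_value(gamble_a), expected_value(gamble_b), "->", best[0])   # 20.0 24.0 -> B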


2019 ◽  
Vol 24 (4) ◽  
pp. 297-311
Author(s):  
José David Moreno ◽  
José A. León ◽  
Lorena A. M. Arnal ◽  
Juan Botella

We report the results of a meta-analysis of 22 experiments comparing eye movement data obtained from young (mean age = 21 years) and old (mean age = 73 years) readers. The data included six eye movement measures (mean gaze duration, mean fixation duration, total sentence reading time, mean number of fixations, mean number of regressions, and mean length of progressive saccade eye movements). Estimates of the standardized mean difference, d, between the age groups were obtained for all six measures. The results showed positive combined effect size estimates in favor of the young adult group (between 0.54 and 3.66 across the measures), although the difference for the mean number of fixations was not significant. Young adults systematically make shorter gazes, fewer regressions, and shorter saccadic movements during reading than older adults, and they also read faster. The meta-analysis results statistically confirm the most common patterns observed in previous research; therefore, eye movements seem to be a useful tool for measuring behavioral changes due to the aging process. Moreover, these results do not allow us to discard either of the two main hypotheses put forward to explain the observed aging effects, namely neural degenerative problems and the adoption of compensatory strategies.
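For reference, the standardized mean difference aggregated in such a meta-analysis is computed per measure roughly as in this sketch (the group means, standard deviations, and sample sizes are made up):

from math import sqrt

# Hypothetical summary statistics for one eye-movement measure in the two groups.
mean_old, sd_old, n_old = 280.0, 45.0, 30        # e.g. mean fixation duration (ms)
mean_young, sd_young, n_young = 240.0, 40.0, 30

# Pooled standard deviation and standardized mean difference (Cohen's d).
sd_pooled = sqrt(((n_old - 1) * sd_old**2 + (n_young - 1) * sd_young**2)
                 / (n_old + n_young - 2))
d = (mean_old - mean_young) / sd_pooled
print(round(d, 2))   # about 0.94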

