Spontaneous eye-movements in neutral and emotional gaze-cuing: An eye-tracking investigation

Heliyon ◽  
2019 ◽  
Vol 5 (4) ◽  
pp. e01583 ◽  
Author(s):  
Sarah D. McCrackin ◽  
Sarika K. Soomal ◽  
Payal Patel ◽  
Roxane J. Itier
2020 ◽  
Author(s):  
David Harris ◽  
Mark Wilson ◽  
Tim Holmes ◽  
Toby de Burgh ◽  
Samuel James Vine

Head-mounted eye tracking has been fundamental for developing an understanding of sporting expertise, as the way in which performers sample visual information from the environment is a major determinant of successful performance. There is, however, a long-running tension between the desire to study realistic, in situ gaze behaviour and the difficulty of acquiring accurate ocular measurements in dynamic and fast-moving sporting tasks. Here, we describe how immersive technologies, such as virtual reality, offer an increasingly compelling approach for conducting eye movement research in sport. The possibility of studying gaze behaviour in representative and realistic environments, but with high levels of experimental control, could enable significant strides forward for eye tracking in sport and improve understanding of how eye movements underpin sporting skills. By providing a rationale for virtual reality as an optimal environment for eye tracking research, as well as outlining practical considerations related to hardware, software and data analysis, we hope to guide researchers and practitioners in the use of this approach.


Foods ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 354
Author(s):  
Jakub Berčík ◽  
Johana Paluchová ◽  
Katarína Neomániová

The appearance of food creates expectations about the harmony of its taste, delicacy, and overall quality, which subsequently affect not only intake itself but also many other aspects of the behavior of customers of catering facilities. The main goal of this article is to find out what effect the visual design of food (waffles) prepared from the same ingredients and served in three different ways (on a stone plate, street-food style, and on a classic white plate) has on consumer preferences. In addition to classic tablet-assisted personal interview (TAPI) tools, biometric methods such as eye tracking and face reading were used to obtain unconscious feedback. During testing, the air quality in the room was monitored with an Extech device, and the influence of the visual design of the food on the perception of its smell was examined. At the end of the paper, we point out the importance of combining classic feedback-collection techniques (TAPI) with measurements of subconscious reactions based on monitoring the eye movements and facial expressions of the respondents, which provides a whole new perspective on the perception of visual design and the serving of food, as well as more effective targeting and use of corporate resources.


Healthcare ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 10
Author(s):  
Chong-Bin Tsai ◽  
Wei-Yu Hung ◽  
Wei-Yen Hsu

Optokinetic nystagmus (OKN) is an involuntary eye movement induced by motion of a large proportion of the visual field. It consists of a “slow phase (SP)”, with eye movements in the same direction as the movement of the pattern, and a “fast phase (FP)”, with saccadic eye movements in the opposite direction. The study of OKN can reveal valuable information in ophthalmology, neurology, and psychology. However, commercially available high-resolution, research-grade eye trackers are usually expensive. We developed a novel, fast, and effective system combined with a low-cost eye-tracking device to measure OKN eye movements accurately and quantitatively. The experimental results indicate that the proposed method achieves fast and promising results in comparison with several traditional approaches.
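The abstract does not include the authors' algorithm, but a common starting point for quantifying OKN is to separate slow and fast phases with a velocity threshold applied to the eye-position trace. The sketch below illustrates that idea in Python; the sampling rate and threshold are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def segment_okn(x_deg, fs=60.0, fp_velocity_deg_s=30.0):
    """Split a horizontal eye-position trace into slow- and fast-phase
    samples with a simple velocity threshold (values are illustrative)."""
    velocity = np.gradient(np.asarray(x_deg, dtype=float)) * fs  # deg/s
    fast = np.abs(velocity) > fp_velocity_deg_s  # fast-phase (saccadic) mask
    slow = ~fast                                 # slow-phase mask
    # Mean slow-phase velocity is a common summary measure of OKN.
    mean_sp_velocity = velocity[slow].mean() if slow.any() else float("nan")
    return fast, slow, mean_sp_velocity

# Synthetic 1-s trace: ~5 deg/s drift with one abrupt resetting saccade.
t = np.linspace(0.0, 1.0, 60)
trace = 5.0 * t
trace[30:] -= 4.0
fast, slow, sp_vel = segment_okn(trace)  # sp_vel comes out close to 5 deg/s
```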


2020 ◽  
pp. 1-27
Author(s):  
Katja I. Haeuser ◽  
Shari Baum ◽  
Debra Titone

Comprehending idioms (e.g., bite the bullet) requires that people appreciate their figurative meanings while suppressing literal interpretations of the phrase. While much is known about idioms, an open question is how healthy aging and noncanonical form presentation affect idiom comprehension when the task is to read sentences silently for comprehension. Here, younger and older adults read sentences containing idioms or literal phrases while we monitored their eye movements. Idioms were presented in a canonical or a noncanonical form (e.g., bite the iron bullet). To assess whether people integrate figurative or literal interpretations of idioms, a disambiguating region that was figuratively or literally biased followed the idiom in each sentence. During early stages of reading, older adults showed facilitation for canonical idioms, suggesting a greater sensitivity to stored idiomatic forms. During later stages of reading, older adults showed slower reading times when canonical idioms were biased toward their literal interpretation, suggesting they were more likely to interpret idioms figuratively on the first pass. In contrast, noncanonical form presentation slowed comprehension of figurative meanings comparably in younger and older participants. We conclude that idioms may be more strongly entrenched in older adults, and that noncanonical form presentation slows comprehension of figurative meanings.
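For readers unfamiliar with the eye-movement measures involved, first-pass reading time (gaze duration) on a region such as the disambiguating region is typically computed by summing consecutive fixations from the first entry into the region until the eyes first leave it. The following is a minimal sketch under that standard definition; the Fixation fields and region labels are hypothetical, not the authors' data format.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    start_ms: float
    duration_ms: float
    region: str  # hypothetical interest-region label for this fixation

def first_pass_time(fixations, target="disambiguating"):
    """Gaze duration: sum consecutive fixations in the target region
    from first entry until the eyes first leave it."""
    total, entered = 0.0, False
    for fix in fixations:
        if fix.region == target:
            total += fix.duration_ms
            entered = True
        elif entered:
            break  # first exit from the region ends the first pass
    return total

fixes = [Fixation(0, 210, "idiom"),
         Fixation(220, 180, "disambiguating"),
         Fixation(410, 200, "disambiguating"),
         Fixation(620, 150, "post-region")]
print(first_pass_time(fixes))  # 380.0
```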


2021 ◽  
Author(s):  
Federico Carbone ◽  
Philipp Ellmerer ◽  
Marcel Ritter ◽  
Sabine Spielberger ◽  
Philipp Mahlknecht ◽  
...  

2021 ◽  
Vol 20 (2) ◽  
pp. 84-96
Author(s):  
Mitja Ružojčić ◽  
Zvonimir Galić ◽  
Antun Palanović ◽  
Maja Parmač Kovačić ◽  
Andreja Bubić

To better understand the process of responding to the Conditional Reasoning Test for Aggression (CRT-A) and its implications for the test's use in personnel selection, we conducted two lab studies in which we compared the test scores and eye movements of participants responding honestly with those of participants faking the test. Study 1 results showed that, although participants might try to respond differently to the CRT-A while faking, it remains an indirect and unfakeable measure as long as the test's purpose is undisclosed. Study 2 showed that revealing the true purpose of the CRT-A diminishes the test's indirect nature: the test becomes fakeable, solving it requires less attention, and participants direct their eyes more toward response alternatives congruent with the presentational demands.


2021 ◽  
pp. 1-26
Author(s):  
Jan-Louis Kruger ◽  
Natalia Wisniewska ◽  
Sixin Liao

High subtitle speed undoubtedly impacts the viewer experience. However, little is known about how fast subtitles might impact the reading of individual words. This article presents new findings on the effect of subtitle speed on viewers’ reading behavior, using word-based eye-tracking measures with specific attention to word skipping and rereading. In multimodal reading situations such as reading subtitles in video, rereading allows people to correct for oculomotor error or comprehension failure during linguistic processing, or to integrate words with elements of the image to build a situation model of the video. However, the opportunity to reread words, to read the majority of the words in a subtitle, and to read subtitles to completion is likely to be compromised when subtitles are too fast. Participants watched videos with subtitles at 12, 20, and 28 characters per second (cps) while their eye movements were recorded. Comprehension declined as speed increased. Eye movement records also showed that faster subtitles resulted in more incomplete reading of subtitles. Furthermore, increased speed caused fewer words to be reread following both horizontal eye movements (likely reflecting reduced lexical processing) and vertical eye movements (likely reducing higher-level comprehension and integration).
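Subtitle speed in characters per second is straightforward to compute from a subtitle's text and its display duration, though conventions differ on whether spaces and line breaks count. A small sketch, assuming spaces count and line breaks do not (one common convention, not necessarily the one used in this study):

```python
def chars_per_second(text: str, duration_s: float, count_spaces: bool = True) -> float:
    """Subtitle presentation speed in characters per second (cps).
    Line breaks are excluded; spaces count by default (conventions vary)."""
    chars = text.replace("\n", "")
    if not count_spaces:
        chars = chars.replace(" ", "")
    return len(chars) / duration_s

# A two-line subtitle displayed for 2.5 s:
print(chars_per_second("He said he'd be late.\nAgain.", 2.5))  # ~10.8 cps
```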


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245777
Author(s):  
Fanny Poncet ◽  
Robert Soussignan ◽  
Margaux Jaffiol ◽  
Baptiste Gaudelus ◽  
Arnaud Leleu ◽  
...  

Recognizing facial expressions of emotions is a fundamental ability for adaptation to the social environment. To date, it remains unclear whether the spatial distribution of eye movements predicts accurate recognition or, on the contrary, confusion in the recognition of facial emotions. In the present study, we asked participants to recognize facial emotions while monitoring their gaze behavior using eye-tracking technology. In Experiment 1a, 40 participants (20 women) performed a classic facial emotion recognition task with a 5-choice procedure (anger, disgust, fear, happiness, sadness). In Experiment 1b, a second group of 40 participants (20 women) was exposed to the same materials and procedure, except that they were instructed to say whether (i.e., Yes/No response) the face expressed a specific emotion (e.g., anger), with the five emotion categories tested in distinct blocks. In Experiment 2, two groups of 32 participants performed the same task as in Experiment 1a while exposed to partial facial expressions composed of action units (AUs) present or absent in some parts of the face (top, middle, or bottom). The coding of the AUs produced by the models showed complex facial configurations for most emotional expressions, with several AUs in common. Eye-tracking data indicated that relevant facial actions were actively gazed at by the decoders during both accurate recognition and errors. False recognition was mainly associated with the additional visual exploration of less relevant facial actions in regions containing ambiguous AUs or AUs relevant to other emotional expressions. Finally, the recognition of facial emotions from partial expressions showed that no single facial action was necessary to effectively communicate an emotional state. Rather, the recognition of facial emotions relied on the integration of a complex set of facial cues.
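As a concrete illustration of the top/middle/bottom partition used for the partial expressions, fixations can be binned by their vertical position within a normalized face bounding box. The equal-thirds split below is an illustrative assumption, not the authors' coding scheme.

```python
def face_region(y_norm: float) -> str:
    """Bin a fixation by vertical position in a normalized face box
    (0 = top of face, 1 = bottom). Equal thirds are an assumption."""
    if y_norm < 1 / 3:
        return "top"     # brows, eyes
    if y_norm < 2 / 3:
        return "middle"  # nose, cheeks
    return "bottom"      # mouth, chin

print(face_region(0.8))  # 'bottom' (e.g., a fixation near the mouth)
```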


Author(s):  
Gavindya Jayawardena ◽  
Sampath Jayarathna

Eye-tracking experiments involve areas of interest (AOIs) for the analysis of eye gaze data. While there are tools to delineate AOIs for extracting eye movement data, they may require users to manually draw AOI boundaries on the eye-tracking stimuli or to use markers to define AOIs. This paper introduces two novel techniques to dynamically filter eye movement data from AOIs for the analysis of eye metrics at multiple levels of granularity. The authors incorporate pre-trained object detectors and object instance segmentation models for offline detection of dynamic AOIs in video streams. This research presents the implementation and evaluation of object detectors and object instance segmentation models to find the best model to integrate into a real-time eye movement analysis pipeline. The authors filter gaze data that falls within the polygonal boundaries of detected dynamic AOIs and apply an object detector to find bounding boxes in a public dataset. The results indicate that the dynamic AOIs generated by object detectors capture 60% of eye movements, while object instance segmentation models capture 30% of eye movements.
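The core filtering step, keeping only the gaze samples that fall inside a detected AOI's polygon, amounts to a point-in-polygon test per sample. Below is a minimal sketch using matplotlib's Path; the AOI label, vertex format, and first-match rule are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from matplotlib.path import Path

def filter_gaze_by_aoi(gaze_xy, aoi_polygons):
    """Label each gaze sample with the first detected AOI polygon that
    contains it, or None if it falls outside every AOI."""
    gaze_xy = np.asarray(gaze_xy, dtype=float)
    labels = [None] * len(gaze_xy)
    for name, verts in aoi_polygons.items():
        inside = Path(verts).contains_points(gaze_xy)  # point-in-polygon test
        for i in np.flatnonzero(inside):
            if labels[i] is None:  # first matching AOI wins
                labels[i] = name
    return labels

# Hypothetical frame: one detected "face" AOI and three gaze samples.
aois = {"face": np.array([[100, 80], [220, 80], [220, 240], [100, 240]])}
print(filter_gaze_by_aoi([[150, 120], [10, 10], [210, 230]], aois))
# -> ['face', None, 'face']
```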


2021 ◽  
Author(s):  
Ana Pellicer-Sánchez ◽  
Anna Siyanova

The field of vocabulary research is witnessing a growing interest in the use of eye-tracking to investigate topics that have traditionally been examined using offline measures, providing new insights into the processing and learning of vocabulary. During an eye-tracking experiment, participants’ eye movements are recorded while they attend to written or auditory input, resulting in a rich record of online processing behaviour. Because of its many benefits, eye-tracking is becoming a major research technique in vocabulary research. However, before this emerging trend of eye-tracking-based vocabulary research continues to proliferate, it is important to step back and reflect on what current studies have shown about the processing and learning of vocabulary, and on the ways in which the technique can be used in future research. To this end, the present paper provides a comprehensive overview of current eye-tracking research findings on the processing and learning of both single words and formulaic sequences. Current research gaps and potential avenues for future research are also discussed.

