INTERDISCOURSE SWITCHING IN DRAMA TEXT. GAZE PATTERNS AND COMPREHENSION CHECK RESULTS

Author(s):  
M.I. Kiose ◽  
A.A. Rzheshevskaya

The study explores the cognitive process of interdiscourse switching that occurs when reading drama plays containing incorporated fragments of the author's discourse (treated as Areas of Interest). An oculographic experiment reveals the gaze patterns and discourse interpretation patterns that are more and less typical of the process. The experiment is preceded by a parametric and annotation analysis of interdiscourse switching construal. Notably, several construal parameter groups prove contingent on eye movement load redistribution, among them Participant construal, Event construal, and Perspective construal. The results also show that construal affects whether Areas of Interest are mentioned in the subjects' responses; the most significant influence is exerted by the Participant Agentivity and Complexity parameters, as well as by the Event Type parameters.
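The paper itself reports no code, but the kind of comprehension-check analysis it describes can be illustrated with a minimal sketch: aggregate dwell time over the author's-discourse Areas of Interest and tabulate it against whether each AOI was mentioned in a subject's response. All identifiers and the toy data below are hypothetical.

```python
from collections import defaultdict

# Toy fixation log: (aoi_id, fixation_duration_ms); aoi_id is None outside any AOI.
fixations = [("AOI1", 210), (None, 180), ("AOI1", 260), ("AOI2", 190), ("AOI2", 330)]

# Hypothetical comprehension-check outcome: AOIs mentioned in the subject's response.
mentioned = {"AOI1"}

dwell = defaultdict(int)
for aoi, dur in fixations:
    if aoi is not None:
        dwell[aoi] += dur

for aoi, total in sorted(dwell.items()):
    print(f"{aoi}: dwell {total} ms, mentioned: {aoi in mentioned}")
```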

2020 ◽  
Vol 13 (4) ◽  
Author(s):  
Ioannis Agtzidis ◽  
Mikhail Startsev ◽  
Michael Dorr

In this short article we present our manual annotation of the eye movement events in a subset of the large-scale eye tracking data set Hollywood2. Our labels include fixations, saccades, and smooth pursuits, as well as a noise event type (the latter representing blinks, loss of tracking, or physically implausible signals). In order to achieve more consistent annotations, the gaze samples were labelled by a novice rater based on rudimentary algorithmic suggestions, and subsequently corrected by an expert rater. Overall, we annotated eye movement events in the recordings corresponding to 50 randomly selected test set clips and 6 training set clips from Hollywood2, which were viewed by 16 observers and amount to a total of approximately 130 minutes of gaze data. In these labels, 62.4% of the samples were attributed to fixations, 9.1% to saccades, and, notably, 24.2% to pursuit (the remainder marked as noise). After evaluating 15 published eye movement classification algorithms on our newly collected annotated data set, we found that the most recent algorithms perform very well on average, and even reach human-level labelling quality for fixations and saccades, but all leave considerably more room for improvement when it comes to smooth pursuit classification. The data set is made available at https://gin.g-node.org/ioannis.agtzidis/hollywood2_em.
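The evaluation described here compares algorithmic labels against manual ones at the level of individual gaze samples. A minimal sketch of such a comparison, computing per-class precision, recall, and F1 over toy label sequences (not the actual Hollywood2 annotations, and not necessarily the authors' exact metric):

```python
# Labels per gaze sample: FIX (fixation), SAC (saccade), SP (smooth pursuit), NOISE.
human = ["FIX", "FIX", "SAC", "SP", "SP", "SP", "FIX", "NOISE"]
algo  = ["FIX", "FIX", "SAC", "SP", "FIX", "SP", "FIX", "NOISE"]

for cls in ("FIX", "SAC", "SP"):
    tp = sum(h == cls and a == cls for h, a in zip(human, algo))
    fp = sum(h != cls and a == cls for h, a in zip(human, algo))
    fn = sum(h == cls and a != cls for h, a in zip(human, algo))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    print(f"{cls}: precision {prec:.2f}, recall {rec:.2f}, F1 {f1:.2f}")
```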


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jordan Navarro ◽  
Otto Lappi ◽  
François Osiurak ◽  
Emma Hernout ◽  
Catherine Gabaude ◽  
...  

Active visual scanning of the scene is a key task element in all forms of human locomotion. In the field of driving, models of steering (lateral control) and speed adjustment (longitudinal control) are largely based on drivers' visual inputs. Despite the knowledge gained on gaze behaviour behind the wheel, our understanding of the sequential aspects of the gaze strategies that actively sample that input remains restricted. Here, we apply scan path analysis to investigate sequences of visual scanning in manual and highly automated simulated driving. Five stereotypical visual sequences were identified under manual driving: forward polling (i.e. far road explorations), guidance, backwards polling (i.e. near road explorations), scenery, and speed monitoring scan paths. The previously undocumented backwards polling scan paths were the most frequent. Under highly automated driving, the relative frequency of backwards polling scan paths decreased, the relative frequency of guidance scan paths increased, and automation-supervision-specific scan paths appeared. The results shed new light on the gaze patterns engaged while driving. Methodological and empirical questions for future studies are discussed.
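Scan path analysis of this kind typically works on a sequence of fixated regions; recurring transitions between regions are candidates for stereotypical scan paths. A rough sketch under that assumption, with hypothetical region labels rather than the study's actual coding:

```python
from collections import Counter

# Toy fixation sequence over coarse driving-scene regions (hypothetical labels).
seq = ["far_road", "near_road", "far_road", "mirror", "speedometer",
       "far_road", "near_road", "far_road", "near_road", "far_road"]

# Count region bigrams (transitions); frequently recurring n-grams are the
# raw material for identifying stereotypical scan paths.
bigrams = Counter(zip(seq, seq[1:]))
for (a, b), n in bigrams.most_common(3):
    print(f"{a} -> {b}: {n}")
```

Here the most frequent transition pair (far road to near road and back) would correspond to the polling-type scan paths described above.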


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5178
Author(s):  
Sangbong Yoo ◽  
Seongmin Jeong ◽  
Seokyeon Kim ◽  
Yun Jang

Gaze movement and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. In these analyses, the eye movement data and the saliency map are shown to analysts either as separate views or as merged views. However, analysts become frustrated when they need to memorize all of the separate views, or when the eye movements obscure the saliency map in the merged views. It is therefore not easy to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express the visual attention of an observer. The visual clues that represent visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data together with the saliency features to interpret the visual attention, and we analyze gaze behavior with the proposed visualization to evaluate whether embedding saliency features within the visualization helps analysts understand the visual attention of an observer.
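One simple way to relate gaze to saliency, in the spirit of what the paper describes, is to sample the saliency map at each gaze position; the sketch below does exactly that on toy data (the authors' actual visualization technique is more elaborate than this):

```python
import numpy as np

# Toy saliency map (values in [0, 1]) and gaze samples in pixel coordinates.
rng = np.random.default_rng(0)
saliency = rng.random((1080, 1920))
gaze = [(415, 960), (420, 930), (700, 1200)]  # (row, col) pairs

# Saliency at each gaze sample: a crude proxy for how strongly bottom-up
# features could account for where the observer looked.
values = [saliency[r, c] for r, c in gaze]
print(f"mean saliency at gaze: {np.mean(values):.3f}")
```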


2020 ◽  
Vol 57 (12) ◽  
pp. 1392-1401
Author(s):  
Mark P. Pressler ◽  
Emily L. Geisler ◽  
Rami R. Hallac ◽  
James R. Seaward ◽  
Alex A. Kane

Introduction and Objectives: Surgical treatment for trigonocephaly aims to eliminate a stigmatizing deformity, yet the severity that captures unwanted attention is unknown. Surgeons intervene at different points of severity, eliciting controversy. This study used eye tracking to investigate when deformity is perceived. Material and Methods: Three-dimensional photogrammetric images of a normal child and a child with trigonocephaly were mathematically deformed, in 10% increments, to create a spectrum of 11 images. These images were shown to participants using an eye tracker. Participants' gaze patterns were analyzed, and participants were asked whether each image looked "normal" or "abnormal." Results: Sixty-six graduate students were recruited. Average dwell time toward pathologic areas of interest (AOIs) increased proportionally with severity, from 0.77 ± 0.33 seconds at 0% deformity to 1.08 ± 0.75 seconds at 100% deformity (P < .0001). A majority of participants did not agree that an image looked "abnormal" until 90% deformity, from any angle. Conclusion: Eye tracking can be used as a proxy for attention threshold toward orbitofrontal deformity. The amount of attention toward orbitofrontal AOIs increased proportionally with severity. Participants did not generally agree there was "abnormality" until the deformity was severe. This study supports the assertion that surgical intervention may be best reserved for more severe deformity.
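Dwell time toward an AOI is the summed duration of fixations landing inside it. A minimal sketch with a rectangular AOI and toy fixation records (the study used AOIs on 3D photogrammetric images, so this is only an illustration):

```python
# Rectangular AOI and toy fixations (x, y, duration_s); all values hypothetical.
aoi = (300, 500, 200, 350)  # x_min, x_max, y_min, y_max in pixels
fixations = [(350, 250, 0.31), (600, 400, 0.22), (480, 330, 0.46)]

x0, x1, y0, y1 = aoi
dwell = sum(d for x, y, d in fixations if x0 <= x <= x1 and y0 <= y <= y1)
print(f"dwell time in AOI: {dwell:.2f} s")
```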


1995 ◽  
Vol 73 (4) ◽  
pp. 1632-1652 ◽  
Author(s):  
J. O. Phillips ◽  
L. Ling ◽  
A. F. Fuchs ◽  
C. Siebold ◽  
J. J. Plorde

1. We studied horizontal eye and head movements in three monkeys that were trained to direct their gaze (eye position in space) toward jumping targets while their heads were both fixed and free to rotate about a vertical axis. We considered all gaze movements that traveled ≥ 80% of the distance to the new visual target. 2. The relative contributions and metrics of eye and head movements to the gaze shift varied considerably from animal to animal and even within animals. Head movements could be initiated early or late and could be large or small. The eye movements of some monkeys showed a consistent decrease in velocity as the head accelerated, whereas others did not. Although all gaze shifts were hypometric, they were more hypometric in some monkeys than in others. Nevertheless, certain features of the gaze shift were identifiable in all monkeys. To identify those we analyzed gaze, eye-in-head position, and head position, and their velocities, at three points in time during the gaze shift: 1) when the eye had completed its initial rotation toward the target, 2) when the initial gaze shift had landed, and 3) when the head movement was finished. 3. For small gaze shifts (< 20 degrees) the initial gaze movement consisted entirely of an eye movement because the head did not move. As gaze shifts became larger, the eye movement contribution saturated at approximately 30 degrees and the head movement contributed increasingly to the initial gaze movement. For the largest gaze shifts, the eye usually began counterrolling or remained stable in the orbit before gaze landed. During the interval between eye and gaze end, the head alone carried gaze to completion. Finally, when the head movement landed, it was almost aimed at the target and the eye had returned to within 10 ± 7 degrees (mean ± SD) of straight ahead. Between the end of the gaze shift and the end of the head movement, gaze remained stable in space or a small correction saccade occurred. 4. Gaze movements < 20 degrees landed accurately on target whether the head was fixed or free. For larger target movements, both head-free and head-fixed gaze shifts became increasingly hypometric. Head-free gaze shifts were more accurate, on average, but also more variable. This suggests that gaze is controlled in a different way with the head free. For target amplitudes < 60 degrees, head position was hypometric but the error was rather constant at approximately 10 degrees. (ABSTRACT TRUNCATED AT 400 WORDS)
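The accounting used throughout this abstract rests on the identity gaze = eye-in-head + head position, with the eye's contribution saturating at roughly 30 degrees for large shifts. A toy calculation under that reading:

```python
# Gaze (eye position in space) decomposes as gaze = eye_in_head + head.
# Toy numbers for a large gaze shift (degrees), illustrating the saturation of
# the eye's contribution near ~30 deg reported above; the split is schematic.
target_amplitude = 60.0
eye_contribution = min(target_amplitude, 30.0)  # eye contribution saturates
head_contribution = target_amplitude - eye_contribution

gaze_shift = eye_contribution + head_contribution
print(f"eye: {eye_contribution} deg, head: {head_contribution} deg, "
      f"gaze: {gaze_shift} deg")
```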


2020 ◽  
pp. 073563312097861
Author(s):  
Marko Pejić ◽  
Goran Savić ◽  
Milan Segedinac

This study proposes a software system for determining gaze patterns in on-screen testing. The system applies machine learning techniques to eye movement data obtained from an eye-tracking device to categorize students according to their gaze behavior pattern while solving an on-screen test. These patterns are determined by converting eye movement coordinates into a sequence of regions of interest. The proposed software system extracts features from the sequence and performs clustering that groups students by their gaze pattern. To determine gaze patterns, the system contains components for communicating with an eye-tracking device, collecting and preprocessing students' gaze data, and visualizing the data using different presentation methods. This study presents a methodology for determining gaze patterns and the implementation details of the proposed software. The approach was evaluated by determining the gaze patterns of 51 undergraduate students who took a general knowledge test containing 20 questions. The study aims to provide a software infrastructure that can use students' gaze patterns as an additional indicator of, among other factors, their reading behavior and their attention or processing difficulty.
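A compressed sketch of the pipeline as described (gaze coordinates to region-of-interest sequence, then per-student features, then clustering), using a hypothetical 2x2 screen grid and scikit-learn's KMeans in place of whatever clustering and feature set the system actually employs:

```python
import numpy as np
from sklearn.cluster import KMeans

def roi(x, y, w=1920, h=1080):
    """Map a gaze coordinate to one of four screen quadrants (0..3)."""
    return (y > h / 2) * 2 + (x > w / 2)

# Toy gaze samples for three students; real data would come from the tracker.
students = [
    [(100, 100), (1500, 200), (1600, 900)],
    [(200, 800), (300, 900), (1700, 950)],
    [(120, 110), (1450, 220), (1620, 880)],
]

# Feature vector per student: fraction of samples falling in each ROI.
features = np.array([np.bincount([roi(x, y) for x, y in s], minlength=4) / len(s)
                     for s in students])

# Group students by gaze pattern.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)
```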


2018 ◽  
Vol 29 (11) ◽  
pp. 1878-1889 ◽  
Author(s):  
Minou Ghaffari ◽  
Susann Fiedler

According to research on the processes underlying decisions, a two-channel mechanism connects attention and choices: top-down and bottom-up processes. To identify the magnitude of each channel, we exogenously varied information intake by systematically interrupting participants' decision processes in Study 1 (N = 116). Results showed that participants were more likely to choose a predetermined target option. Because selection effects limited the interpretation of these results, we used a sequential-presentation paradigm in Study 2 (preregistered, N = 100). To partial out bottom-up effects of attention on choices, in particular, we presented alternatives by mirroring the gaze patterns of autonomous decision makers. Results revealed that final fixations successfully predicted choices when experimentally manipulated (bottom up). Specifically, up to 11.32% of the link between attention and choices is driven by exogenously guided attention (a 1.19% change in choices overall), while the remaining variance is explained by top-down preference formation.


Vision ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 39
Author(s):  
Julie Royo ◽  
Fabrice Arcizet ◽  
Patrick Cavanagh ◽  
Pierre Pouget

We introduce a blind spot method to create image changes contingent on eye movements. One challenge of eye movement research is triggering display changes contingent on gaze: the eye-tracking system must capture an image of the eye, detect and track the pupil and corneal reflections to estimate the gaze position, and then transfer these data to the computer that updates the display. All of these steps introduce delays that are often difficult to predict. To avoid these issues, we describe a simple blind spot method that generates gaze-contingent display manipulations without any eye-tracking system or display controls.
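The method exploits the physiological blind spot, which sits roughly 15 degrees temporal to fixation. Given that standard figure and a known viewing distance, the screen location falling inside the blind spot can be computed without any tracker, which is what makes the manipulation gaze contingent for free. A hedged sketch of that geometry (the exact eccentricity and offsets vary between observers and would be calibrated in practice):

```python
import math

# Horizontal screen offset of the blind spot from the fixation point, assuming
# the conventional ~15 deg temporal eccentricity. A stimulus drawn there is
# invisible to the tested eye (with the other eye covered), so display changes
# placed at this location are effectively gaze contingent without a tracker.
viewing_distance_cm = 57.0
eccentricity_deg = 15.0

offset_cm = viewing_distance_cm * math.tan(math.radians(eccentricity_deg))
print(f"place the stimulus ~{offset_cm:.1f} cm temporal to fixation")
```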


2021 ◽  
Vol 5 ◽  
Author(s):  
Christian Kosel ◽  
Doris Holzberger ◽  
Tina Seidel

The paper addresses cognitive processes during a teacher's professional task of assessing learning-relevant student characteristics. We explore how eye movement patterns (scanpaths) differ between expert and novice teachers in an assessment situation. In an eye-tracking experiment, participants watched an authentic video of a classroom lesson and were subsequently asked to assess five different students. Instead of using the typically reported averaged gaze data (e.g., number of fixations), we used gaze patterns as an indicator of visual behavior. We extracted scanpath patterns, compared them qualitatively (common sub-patterns) and quantitatively (scanpath entropy) between experts and novices, and related teachers' visual behavior to their assessment competence. Results show that teachers' scanpaths were idiosyncratic yet more similar to those of teachers in the same expertise group. Moreover, experts monitored all target students more regularly and made recurring scans to re-adjust their assessments. This behavior was quantified using Shannon's entropy score: experts' scanpaths were more complex, involved more frequent revisits of all students, and transferred attention between all students with equal probability. Experts' visual behavior was also statistically related to higher judgment accuracy.
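Scanpath entropy in the Shannon sense can be computed from the distribution of visits (or transitions) over the target students; higher entropy means attention was spread more evenly. A minimal sketch of the visit-distribution variant (the paper's exact formulation may differ):

```python
import math
from collections import Counter

# Toy scanpath over five target students; labels are hypothetical.
scanpath = ["S1", "S2", "S1", "S3", "S4", "S5", "S2", "S3"]

counts = Counter(scanpath)
n = len(scanpath)
entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
print(f"scanpath entropy: {entropy:.2f} bits")
```

An expert who revisits all five students with equal probability would approach the maximum of log2(5), approximately 2.32 bits, on this measure.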


2021 ◽  
Author(s):  
Simona Skripkauskaite ◽  
Ioana Mihai ◽  
Kami Koldewyn

Human visual attention is readily captured by the social information in scenes. Multiple studies have shown that social areas of interest (AOIs) such as faces and bodies attract more attention than non-social AOIs (e.g., objects or background). However, whether this attentional bias is moderated by the presence (or absence) of a social interaction remains unclear. Here, the gaze of 70 young adults was tracked during free viewing of 60 naturalistic scenes. All photographs depicted two people, who were either interacting or not. Analyses of dwell time revealed that more attention was spent on human than on background AOIs in the interactive pictures. In the non-interactive pictures, however, dwell time did not differ between AOI types. In the time-to-first-fixation analysis, humans always captured attention before other elements of the scene, although this difference was slightly larger in the interactive than in the non-interactive scenes. These findings confirm the existence of a bias towards social information in attentional capture, and suggest that the presence of a social interaction may be important in inducing a similar social bias in attentional engagement. Together with previous research using less naturalistic stimuli, these findings suggest that social interactions carry additional social value that guides one's perceptual system.
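Both measures reported here are straightforward to derive from AOI-labelled fixations: dwell time sums fixation durations per AOI category, and time to first fixation takes the earliest fixation onset. A toy sketch with hypothetical records:

```python
# Toy fixation records: (aoi_type, onset_s, duration_s); labels are hypothetical.
fixations = [("human", 0.25, 0.40), ("background", 0.70, 0.30),
             ("human", 1.05, 0.55), ("background", 1.65, 0.20)]

for kind in ("human", "background"):
    rows = [f for f in fixations if f[0] == kind]
    dwell = sum(d for _, _, d in rows)          # total dwell time
    ttff = min(t for _, t, _ in rows)           # time to first fixation
    print(f"{kind}: dwell {dwell:.2f} s, first fixation at {ttff:.2f} s")
```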

