What Does Visual Gaze Attend to during Driving?

Author(s): Mohsen Shirpour, Steven Beauchemin, Michael Bauer
Author(s): Tanja Munz, Noel Schaefer, Tanja Blascheck, Kuno Kurzhals, Eugene Zhang, ...

HPB, 2020
Author(s): Chetanya Sharma, Harsimrat Singh, Felipe Orihuela-Espina, Ara Darzi, Mikael H. Sodergren

Semiotica, 2021, Vol 0 (0)
Author(s): James Batcho

Abstract Stanley Kubrick is regarded as a filmmaker of complex imagery. Yet the vitality of his more metaphysical works lies in what is unseen. There is an embodiment to Kubrick’s films that maintains a sense of subjectivity, but one that is unapparent and non-visual. This opens another way into Kubrick’s works: that of conditions of audibility (hearing/listening), affectivity, and signs. To think of embodiment from such an audible perspective requires one to subvert film spectatorship (the frame) and instead enter the reality of the film’s immanent, borderless unfolding as itself. This essay applies Gilles Deleuze’s semiotic concepts of cinema, metaphysics, and subjectivity to conditions of audibility and unseeing, a connection Deleuze largely ignored in his writings. These dual concepts of audibility and unseeing break prevailing analytical norms in cinema discourse that affirm limitations via material, visual, textual, and spatial reification: subjective-objective delineations, the body and the gaze, sound as necessarily spatial/material, and the dominance of images in regard to aesthetics, surveillance, and evidence. Instead, this essay moves through Kubrick’s constructions of milieu that are unseen in the midst of an otherwise visual unfolding, and audible in the midst of an otherwise sonic unfolding. To consider Kubrick’s films through their audible embodiment, one must detach (1) the microphone from its adherence to space, and (2) the body from its visual gaze. Here, sounds, images, and objects become secondary to hearing and signs in a temporal unfolding, resulting in a cinema that is experiential rather than representational. This opens onto an actuality of spirit within the world of the film, offering new opportunities for creativity in the cinematic form.


2011, Vol 106 (6), pp. 1070-1074
Author(s): Cristina Almansa, Muhammad W Shahid, Michael G Heckman, Susan Preissler, Michael B Wallace

Author(s): Shujian Yu, Weihua Ou, Xinge You, Xiubao Jiang, Yun Zhu, ...

Endoscopy, 2018, Vol 50 (07), pp. 701-707
Author(s): Mariam Lami, Harsimrat Singh, James Dilley, Hajra Ashraf, Matthew Edmondson, ...

Abstract Background The adenoma detection rate (ADR) is an important quality indicator in colonoscopy. The aim of this study was to evaluate changes in visual gaze patterns (VGPs) with increasing polyp detection rate (PDR), a surrogate marker of ADR. Methods 18 endoscopists participated in the study. VGPs were measured using eye-tracking technology during the withdrawal phase of colonoscopy and characterized using two analyses – screen-based and anatomy-based. Eye-tracking parameters were used to characterize performance, and the findings were further substantiated using hidden Markov model (HMM) analysis. Results Subjects with higher PDRs spent more time viewing the outer ring of the 3 × 3 grid in both analyses (screen-based: r = 0.56, P = 0.02; anatomy: r = 0.62, P < 0.01). Fixation distribution to the “bottom U” of the screen in the screen-based analysis was positively correlated with PDR (r = 0.62, P = 0.01). HMM analysis demarcated the VGPs into three PDR groups. Conclusion This study defined distinct VGPs that are associated with expert behavior. These data may allow the introduction of visual gaze training within structured training programs and have implications for adoption in higher-level assessment.
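The analysis style described above correlates per-endoscopist gaze measures (e.g., the fraction of withdrawal-phase viewing time spent on the outer ring of a 3 × 3 screen grid) with PDR, and then uses an HMM over gaze features to demarcate groups. A minimal sketch of both steps, assuming scipy and hmmlearn as the tools (the paper does not name its software) and using invented placeholder data, not the study's:

```python
import numpy as np
from scipy.stats import pearsonr          # Pearson r and two-sided P value
from hmmlearn.hmm import GaussianHMM      # HMM used purely for illustration

rng = np.random.default_rng(0)

# Illustrative per-endoscopist data (n = 18 as in the study, values invented):
# fraction of viewing time on the outer ring of the 3x3 grid, and each
# endoscopist's polyp detection rate (PDR).
outer_ring_fraction = rng.uniform(0.3, 0.8, size=18)
pdr = 0.2 + 0.5 * outer_ring_fraction + rng.normal(0, 0.05, size=18)

r, p = pearsonr(outer_ring_fraction, pdr)
print(f"outer-ring viewing vs. PDR: r = {r:.2f}, P = {p:.3f}")

# Sketch of the HMM step: fit a 3-state Gaussian HMM to a sequence of gaze
# features (here, invented x/y fixation coordinates), loosely mirroring the
# paper's use of HMMs to demarcate gaze-pattern groups.
fixations = rng.normal(size=(500, 2))                 # toy (x, y) gaze samples
hmm = GaussianHMM(n_components=3, covariance_type="full", random_state=0)
hmm.fit(fixations)
states = hmm.predict(fixations)                       # hidden state per sample
print("state occupancy:", np.bincount(states) / len(states))
```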


Author(s): Ory Medina, Daniel Madrigal, Félix Ramos, Gustavo Torres, Marco Ramos

In humans, the vestibular system, along with other sensory and motor systems, supports three cognitive functions underlying mobility. First, it is responsible for the balance of the body. Second, it allows humans to keep the head stabilized. Finally, whenever the body or head is in motion, it maintains the visual gaze on a desired target. These tasks are performed using an array of sensors located within the inner ear. This paper describes the design and implementation of a synthetic model of the human vestibular system. The model is based on neurophysiological evidence, which makes it necessary to model all of the neural and physical components involved in the balance of the body. The model includes a component for each of the sensors and for the cortical and subcortical neural structures involved; it also defines and generates the necessary motor output signals. The proposed model was connected to a Bioloid® Premium humanoid robot to simulate the motor output and the proprioceptive inputs. The physical tests were inconclusive because the robot's controller could not handle the volume of information the tests required. However, even though the results were not as desired, the communication between the sensors and the architecture, as well as the processing inside the architecture, met all of the authors' expectations.
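The third function named above, keeping gaze on a target while the head moves, is the vestibulo-ocular reflex (VOR): the eyes counter-rotate against sensed head velocity. A minimal control-loop sketch of that idea only (not the authors' neurophysiological architecture; the gain and motion profile are illustrative assumptions):

```python
import numpy as np

# Toy vestibulo-ocular reflex (VOR): the semicircular canals sense head
# angular velocity, and the reflex drives the eyes at the opposite velocity
# so that gaze direction in world coordinates stays on target.
DT = 0.001            # simulation step, seconds
VOR_GAIN = 1.0        # ideal gain; the biological VOR sits slightly below 1

t = np.arange(0, 1.0, DT)
head_velocity = 2.0 * np.sin(2 * np.pi * 1.5 * t)   # head shake, rad/s

head_angle = 0.0
eye_angle = 0.0
gaze_error = []
for w in head_velocity:
    head_angle += w * DT                        # integrate head motion
    eye_angle += -VOR_GAIN * w * DT             # reflex counter-rotates the eye
    gaze_error.append(head_angle + eye_angle)   # world gaze = head + eye

print(f"max gaze error over the trial: {max(abs(e) for e in gaze_error):.2e} rad")
```

With a gain of exactly 1 the gaze error stays at numerical zero; lowering VOR_GAIN shows how an imperfect reflex lets the target drift off the fovea during head motion.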


Author(s): Izabela Krejtz, Krzysztof Krejtz, Katarzyna Wisiecka, Marta Abramczyk, Michał Olszanowski, ...

Abstract The enhancement hypothesis suggests that deaf individuals are more vigilant to visual emotional cues than hearing individuals. The present eye-tracking study examined ambient–focal visual attention when encoding affect from dynamically changing emotional facial expressions. Deaf (n = 17) and hearing (n = 17) individuals watched emotional facial expressions that in 10-s animations morphed from a neutral expression to one of happiness, sadness, or anger. The task was to recognize emotion as quickly as possible. Deaf participants tended to be faster than hearing participants in affect recognition, but the groups did not differ in accuracy. In general, happy faces were more accurately and more quickly recognized than faces expressing anger or sadness. Both groups demonstrated longer average fixation duration when recognizing happiness in comparison to anger and sadness. Deaf individuals directed their first fixations less often to the mouth region than the hearing group. During the last stages of emotion recognition, deaf participants exhibited more focal viewing of happy faces than negative faces. This pattern was not observed among hearing individuals. The analysis of visual gaze dynamics, switching between ambient and focal attention, was useful in studying the depth of cognitive processing of emotional information among deaf and hearing individuals.
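The ambient–focal dynamics mentioned above are commonly quantified with coefficient K, introduced by Krejtz and colleagues: for each fixation, the z-scored fixation duration minus the z-scored amplitude of the following saccade, so positive values mark focal moments (long fixations, short saccades) and negative values mark ambient ones. A minimal sketch under that definition, with an invented scanpath rather than the study's data:

```python
import numpy as np

def coefficient_k_series(fix_durations, saccade_amplitudes):
    """Per-fixation coefficient K (in the style of Krejtz et al.).

    K_i = z(d_i) - z(a_{i+1}): z-scored duration of fixation i minus the
    z-scored amplitude of the saccade that follows it. K_i > 0 suggests
    focal viewing; K_i < 0 suggests ambient viewing.
    """
    d = np.asarray(fix_durations[:-1], dtype=float)  # last fixation has no
    a = np.asarray(saccade_amplitudes, dtype=float)  # following saccade
    z_d = (d - d.mean()) / d.std()
    z_a = (a - a.mean()) / a.std()
    return z_d - z_a

# Invented scanpath: durations in ms and following-saccade amplitudes in
# degrees, drifting from ambient (short fixations, long saccades) toward
# focal viewing, as in the late-stage pattern the abstract describes.
durations = [150, 180, 200, 420, 480, 510, 450]   # 7 fixations
amplitudes = [9.0, 7.5, 6.0, 1.5, 1.0, 0.8]       # 6 saccades between them

k = coefficient_k_series(durations, amplitudes)
half = len(k) // 2
print(f"early-viewing mean K: {k[:half].mean():+.2f}")   # negative: ambient
print(f"late-viewing mean K:  {k[half:].mean():+.2f}")   # positive: focal
```

Note that K averages to zero over the span used for z-scoring, so it is interpreted as a time course (here split into early versus late viewing) rather than as a single trial-wide mean.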

