visual angle
Recently Published Documents

TOTAL DOCUMENTS: 278 (FIVE YEARS: 34)
H-INDEX: 30 (FIVE YEARS: 2)

Mathematics, 2021, Vol 9 (22), pp. 2879
Author(s): Hongxia Ge, Siteng Li, Chunyue Yan

With the continuous advancement of electronic technology, automotive manufacturers are increasingly fitting vehicles with electronic throttles for precise control. Based on the visual angle model (VAM), a car-following model that accounts for the electronic throttle angle of the preceding vehicle is proposed. Stability conditions are obtained through linear stability analysis. By means of nonlinear analysis, the time-dependent Ginzburg–Landau (TDGL) equation is derived first, followed by the modified Korteweg–de Vries (mKdV) equation, and the relationship between the two is established. Finally, numerical simulations show how the visual angle and the electronic throttle affect the stability of traffic flow. Simulation results in MATLAB verify the validity of the model, indicating that incorporating the visual angle and the electronic throttle can improve traffic stability.
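For orientation, the central quantity in a visual angle model is the angle subtended by the preceding vehicle, which grows as the headway shrinks. The sketch below is a minimal illustration of that geometry, assuming a 1.8 m vehicle width; it is not the authors' car-following model or its optimal-velocity function.

```python
import numpy as np

def visual_angle(headway_m, lead_width_m=1.8):
    """Angle (rad) subtended by the preceding vehicle at the given headway."""
    return 2.0 * np.arctan(lead_width_m / (2.0 * headway_m))

def visual_angle_rate(headway_m, closing_speed_mps, lead_width_m=1.8):
    """Time derivative of the visual angle (rad/s); positive when closing in."""
    # d/dt [2*arctan(w / (2*dx))] with d(dx)/dt = -closing_speed
    return lead_width_m * closing_speed_mps / (headway_m**2 + lead_width_m**2 / 4.0)

# Example: a 1.8 m wide car seen 20 m ahead while closing at 5 m/s
print(visual_angle(20.0))            # ~0.09 rad
print(visual_angle_rate(20.0, 5.0))  # ~0.022 rad/s
```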


Author(s): Frank A. Perez, Bong J. Walsh

In recent litigation, human factors experts have been misapplying looming-threshold analysis to offset motions. Looming (image size expansion) analysis is appropriate for a rapid, direct approach to an object (e.g., rear-end collisions) but is inappropriate for offset motions. Looming-threshold analysis is typically applied to nighttime driving when approaching a slow-moving or stopped vehicle that presents no visual cues other than its taillights. This paper lays out the foundation of looming, derives the accepted mathematical equation, and compares it with the rate of visual angle change, which is more applicable to offset motions. An appropriate offset looming-threshold equation is derived. In addition, a special case of collision due to looming combined with lateral motion is addressed, one with historical significance in open-water vessel navigation.
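For context, the standard looming quantity for a direct approach is the rate of change of the visual angle subtended by the lead vehicle, approximately w·v/d² at large distances. The sketch below evaluates it against a detection threshold; the 0.003 rad/s value is a figure often cited in the human factors literature, and the vehicle width is an illustrative assumption. This is not the offset-motion equation derived in the paper.

```python
import numpy as np

# A detection threshold around 0.003 rad/s is often cited in the human
# factors literature for noticing looming; treat it as an assumption here.
LOOMING_THRESHOLD_RAD_S = 0.003

def looming_rate(distance_m, closing_speed_mps, target_width_m=1.8):
    """Rate of visual angle expansion (rad/s) for a direct approach."""
    # For distance >> width this reduces to the familiar w*v/d^2 form.
    return target_width_m * closing_speed_mps / (distance_m**2 + target_width_m**2 / 4.0)

def detection_distance(closing_speed_mps, target_width_m=1.8,
                       threshold=LOOMING_THRESHOLD_RAD_S):
    """Distance (m) at which looming first exceeds the threshold (w*v/d^2 approximation)."""
    return np.sqrt(target_width_m * closing_speed_mps / threshold)

# Example: closing at 25 m/s (~90 km/h) on a stopped 1.8 m wide vehicle
print(detection_distance(25.0))    # ~122 m
print(looming_rate(122.0, 25.0))   # ~0.003 rad/s at that range
```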


2021
Author(s): Ying Zhao, Yang He, Jiaqi Jing, Tie Wang, Guangmiao Jiang

2021, Vol 2
Author(s): Sebastian Oberdörfer, David Heidrich, Sandra Birnstiel, Marc Erich Latoschik

Impaired decision-making leads to an inability to distinguish between advantageous and disadvantageous choices. Impairing a player's decision-making is a common goal of gambling games. Given the recent trend toward gambling in immersive Virtual Reality, it is crucial to investigate the effects of both immersion and the virtual environment (VE) on decision-making. In a novel user study, we measured decision-making using three virtual versions of the Iowa Gambling Task (IGT). The versions differed in their degree of immersion and the design of the virtual environment. Since emotions influence decision-making, we also measured participants' positive and negative affect. Because a larger visual angle on a stimulus leads to an increased emotional response, we kept the visual angle on the Iowa Gambling Task constant across our conditions. Our results revealed no significant impact of immersion or the VE on IGT performance. We also found no significant difference between the conditions with regard to positive and negative affect. This suggests that neither the medium used nor the design of the VE impairs decision-making. However, in combination with a recent study, we provide initial evidence that a larger visual angle on the IGT leads to impairment.
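Holding the visual angle constant across conditions means scaling the stimulus with viewing distance, since a target of size s at distance d subtends θ = 2·arctan(s/2d). A minimal sketch of that relationship follows; the distances and the 20° target angle are hypothetical values for illustration, not the study's actual setup.

```python
import math

def stimulus_size_for_angle(viewing_distance_m, visual_angle_deg):
    """Physical stimulus size (m) that subtends the given visual angle at that distance."""
    return 2.0 * viewing_distance_m * math.tan(math.radians(visual_angle_deg) / 2.0)

# Hypothetical example: keep a 20 degree wide task display across conditions
for distance in (0.6, 1.5, 3.0):  # e.g. desktop monitor vs. two virtual placements
    size = stimulus_size_for_angle(distance, 20.0)
    print(f"{distance:.1f} m viewing distance -> {size:.2f} m wide stimulus")
```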


2021, Vol 15
Author(s): Niklas Zdarsky, Stefan Treue, Moein Esghaei

Real-time gaze tracking provides crucial input to psychophysics studies and neuromarketing applications. Many modern eye-tracking solutions are expensive, mainly because of the high-end hardware specialized for processing infrared camera images. Here, we introduce a deep learning-based approach that uses the video frames of low-cost web cameras. Using DeepLabCut (DLC), an open-source toolbox for extracting points of interest from videos, we obtained facial landmarks critical to gaze location and estimated the point of gaze on a computer screen via a shallow neural network. Tested for three extreme poses, this architecture reached a median error of about one degree of visual angle. Our results contribute to the growing field of deep-learning approaches to eye tracking, laying the foundation for further investigation by researchers in psychophysics and neuromarketing.
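As a minimal sketch of the final regression step, a shallow network can map per-frame landmark coordinates to on-screen gaze coordinates. The landmark count, network size, and placeholder data below are assumptions for illustration; this is not the authors' DeepLabCut pipeline or trained model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data: each row holds (x, y) pixel coordinates of a handful of
# facial/eye landmarks extracted per video frame; targets are screen gaze points.
rng = np.random.default_rng(0)
n_frames, n_landmarks = 2000, 8
landmarks = rng.uniform(0, 640, size=(n_frames, n_landmarks * 2))  # placeholder features
gaze_xy = rng.uniform(0, 1, size=(n_frames, 2))                    # placeholder targets

# A shallow network (one small hidden layer) mapping landmarks -> gaze position.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
model.fit(landmarks, gaze_xy)

# Predict gaze (normalized screen coordinates) for a new frame's landmarks.
print(model.predict(landmarks[:1]))
```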


2021, Vol 566, pp. 125665
Author(s): Nan Jiang, Bin Yu, Feng Cao, Pengfei Dang, Shaohua Cui

i-Perception, 2021, Vol 12 (1), pp. 204166952098333
Author(s): Niklas Stein, Diederick C. Niehorster, Tamara Watson, Frank Steinicke, Katharina Rifai, ...

A number of virtual reality head-mounted displays (HMDs) with integrated eye trackers have recently become commercially available. If their eye-tracking latency is low and reliable enough for gaze-contingent rendering, this may open up many interesting opportunities for researchers. We measured eye-tracking latencies for the Fove-0, the Varjo VR-1, and the High Tech Computer Corporation (HTC) Vive Pro Eye using simultaneous electrooculography measurements. We determined the time from the occurrence of an eye position change to its availability as a data sample from the eye tracker (delay) and the time from an eye position change to the earliest possible change of the display content (latency). For each test and each device, participants performed 60 saccades between two targets 20° of visual angle apart. The targets were continuously visible in the HMD, and the saccades were instructed by an auditory cue. Data collection and eye-tracking calibration were done using the recommended scripts for each device in Unity3D. The Vive Pro Eye was recorded twice, once using the SteamVR SDK and once using the Tobii XR SDK. Our results show clear differences between the HMDs. Delays ranged from 15 ms to 52 ms, and latencies from 45 ms to 81 ms. The Fove-0 appears to be the fastest device and the best suited for gaze-contingent rendering.
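A common way to estimate such a delay is to cross-correlate the eye tracker's output with the simultaneously recorded electrooculography trace and read off the lag at the correlation peak. The sketch below assumes both signals have already been resampled to a common rate and uses a synthetic saccade; it is not the authors' exact analysis.

```python
import numpy as np

def estimate_delay_ms(eog_signal, tracker_signal, sample_rate_hz):
    """Estimate tracker delay as the lag (ms) maximizing cross-correlation with the EOG trace."""
    eog = (eog_signal - np.mean(eog_signal)) / np.std(eog_signal)
    trk = (tracker_signal - np.mean(tracker_signal)) / np.std(tracker_signal)
    correlation = np.correlate(trk, eog, mode="full")
    lags = np.arange(-len(eog) + 1, len(trk))
    best_lag = lags[np.argmax(correlation)]
    return 1000.0 * best_lag / sample_rate_hz

# Synthetic check: a step-like saccade recorded by the EOG, and the same
# signal delayed by 40 ms plus noise standing in for the tracker output.
rate = 1000  # Hz
t = np.arange(0, 2.0, 1.0 / rate)
eog = np.where(t > 1.0, 1.0, 0.0) + 0.01 * np.random.randn(t.size)
tracker = np.where(t > 1.04, 1.0, 0.0) + 0.01 * np.random.randn(t.size)
print(estimate_delay_ms(eog, tracker, rate))  # ~40 ms
```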

