Driver’s Preview Modeling Based on Visual Characteristics through Actual Vehicle Tests

Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6237
Author(s):  
Hongyu Hu ◽  
Ming Cheng ◽  
Fei Gao ◽  
Yuhuan Sheng ◽  
Rencheng Zheng

This paper proposes a method for obtaining drivers’ fixation points and establishing a preview model based on actual vehicle tests. First, eight drivers were recruited to carry out actual vehicle tests on straight and curved roads. The curvature radii of the test curves were 200, 800, and 1500 m, and subjects were required to drive at speeds of 50, 70, and 90 km/h. During driving, eye movement data were collected with a head-mounted eye tracker, while forward road scene images and vehicle status data were recorded simultaneously. An image-to-world coordinate mapping model of the drivers’ visual information was constructed by correcting image distortion and matching the images from the driving recorder. Fixation point data were then obtained using the Identification by Dispersion Threshold (I-DT) algorithm. In addition, the Jarque–Bera test was used to verify the normal distribution of these data and to fit the parameters of the normal distribution. The preview points were then extracted and projected into the world coordinate system. Finally, the preview data obtained under these conditions were fitted to build general preview time probability density maps for different driving speeds and road curvatures. This study extracts drivers’ preview characteristics through actual vehicle tests, providing a visual behavior reference for human-like control of intelligent vehicles.
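
The I-DT algorithm named above is a standard dispersion-threshold fixation detector. A minimal sketch of the classic formulation follows; the window and dispersion thresholds here are illustrative placeholders, not the paper’s values:

```python
import numpy as np

def idt_fixations(x, y, t, max_dispersion=1.0, min_duration=0.1):
    """Identify fixations with the dispersion-threshold (I-DT) method.

    x, y : gaze coordinates (e.g., degrees of visual angle)
    t    : sample timestamps in seconds
    """
    x, y, t = map(np.asarray, (x, y, t))
    n = len(t)

    def dispersion(a, b):
        # Dispersion = (max x - min x) + (max y - min y) over samples a..b-1.
        return (x[a:b].max() - x[a:b].min()) + (y[a:b].max() - y[a:b].min())

    fixations, i = [], 0
    while i < n:
        # Grow a window until it spans the minimum fixation duration.
        j = i
        while j < n and t[j] - t[i] < min_duration:
            j += 1
        if j >= n:
            break
        if dispersion(i, j + 1) <= max_dispersion:
            # Expand the window while dispersion stays under the threshold.
            while j + 1 < n and dispersion(i, j + 2) <= max_dispersion:
                j += 1
            # Record (start, end, centroid x, centroid y) and skip past window.
            fixations.append((t[i], t[j], x[i:j+1].mean(), y[i:j+1].mean()))
            i = j + 1
        else:
            i += 1
    return fixations
```

The fixation centroids returned here are what would then be projected through the image-to-world mapping to yield preview points.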

2018 ◽  
Vol 10 (1) ◽  
pp. 168781401771766
Author(s):  
Jieyu Fan ◽  
Shengdi Chen ◽  
Mingzhang Liang ◽  
Fengyuan Wang

The transportation system comprises people, vehicles, roads, and the environment; the human factor is the active element and plays the key role in linking the complex environment with the vehicle. A virtual driving test was designed to study drivers’ dynamic visual characteristics under different road conditions. This article uses a faceLAB 5.0 eye tracker, a Blue Tiger virtual driving device, and related equipment to record a driver’s eye movement changes under different road conditions, collect eye movement data, and analyze eye movement variation. By comparing, checking, and systematically analyzing the measured data across driving phases, the article analyzes how a driver’s eye movements change while driving. In the virtual driving test, drivers showed a low blink frequency and long blink durations in the starting section. In complicated sections, blink frequency increased, blink durations shortened, and visual fixation points became more dispersed. In the decelerating section, drivers slowed down and stopped the vehicle. The results provide a basis for safe driving: when a measured value exceeds a threshold, unsafe driving can be identified. The research also plays an important practical role in studying drivers’ behavioral processes in multi-source information environments.
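
As a rough illustration of the threshold rule the abstract alludes to, one might compute blink frequency and mean blink duration per road section and flag values beyond a calibrated bound. The limits below are placeholders, not values from the study:

```python
import numpy as np

def blink_metrics(blink_onsets, blink_offsets, section_duration_s):
    """Blink frequency (blinks/min) and mean blink duration (s) for one section."""
    durations = np.asarray(blink_offsets) - np.asarray(blink_onsets)
    freq = len(durations) / (section_duration_s / 60.0)
    mean_dur = durations.mean() if len(durations) else 0.0
    return freq, mean_dur

def is_unsafe(freq, mean_dur, freq_limit=30.0, dur_limit=0.5):
    # Placeholder thresholds: flag driving as unsafe when either metric
    # exceeds its calibrated bound for the section.
    return freq > freq_limit or mean_dur > dur_limit
```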


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Shoushuo Wang ◽  
Zhigang Du ◽  
Fangtong Jiao ◽  
Libo Yang ◽  
Yudan Ni

This study aims to investigate the impact of the longitudinal slope of an urban undersea tunnel on drivers’ visual characteristics. Twenty drivers were enrolled in a real vehicle test in an urban undersea tunnel. First, average fixation time and visual lobe data were collected with an eye tracker, and the significance of differences was tested using one-way repeated-measures analysis of variance (ANOVA). The effects of the slope direction (uphill versus downhill) factor and the longitudinal slope (percent) factor on the two indexes were then analyzed using two-way repeated-measures ANOVA. Second, by fitting a Lorentz model, the impact of the longitudinal slope on average fixation time and visual lobe was analyzed, and a three-dimensional model relating longitudinal slope, average fixation time, and visual lobe was quantified. The results showed that average fixation time and visual lobe under different longitudinal slopes differed markedly on both uphill and downhill sections, and both indexes differed markedly under the two factors. Moreover, with an increase in the longitudinal slope, the average fixation time exhibited a trend of increasing first and then decreasing, while the visual lobe exhibited a trend of decreasing first and then increasing. The average fixation time reached its minimum and maximum values at slopes of 2.15% and 4.0%, whereas the visual lobe reached its maximum and minimum values at slopes of 2.88% and 4.0%. Overall, the longitudinal slope exerted a strong impact on drivers’ visual load.
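
A Lorentz (Cauchy-type) peak model of this kind can be fitted with ordinary nonlinear least squares. A minimal sketch follows; the sample data are made up to stand in for the measured averages and are not the study’s values:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentz(x, y0, A, xc, w):
    """Lorentz peak: baseline y0, area A, center xc, full width w."""
    return y0 + (2 * A / np.pi) * w / (4 * (x - xc) ** 2 + w ** 2)

# Illustrative data only: longitudinal slope (%) vs. average fixation time (ms).
slope = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
fix_time = np.array([420, 390, 370, 385, 410, 450, 480])

# Negative area A produces an inverted peak, i.e., a minimum near xc.
params, _ = curve_fit(lorentz, slope, fix_time, p0=[400, -100, 2.2, 1.0])
print(dict(zip(["y0", "A", "xc", "w"], params)))
```

The fitted center xc is how an extremum slope such as the 2.15% value reported above would be read off the model.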


2020 ◽  
Vol 9 (2) ◽  
pp. 85
Author(s):  
David Lamb ◽  
Joni Downs ◽  
Steven Reader

Finding clusters of events is an important task in many spatial analyses, and both confirmatory and exploratory methods exist to accomplish it. Traditional statistical techniques are viewed as confirmatory, or observational, in that researchers are confirming an a priori hypothesis; these methods often fail when applied to newer types of data such as moving object data and big data. Moving object data incorporate at least three parts: location, time, and attributes. This paper proposes an improved space-time clustering approach that relies on agglomerative hierarchical clustering to identify groupings in movement data. The approach, space–time hierarchical clustering, incorporates location, time, and attribute information to identify groups across a nested structure reflective of a hierarchical interpretation of scale. Simulations are used to understand the effects of different parameters and to compare against existing clustering methodologies. The approach improves on traditional methods by allowing the spatial and temporal components of the data to be examined flexibly together. The method is applied to animal tracking data to identify clusters, or hotspots, of activity within an animal’s home range.
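
A minimal sketch of the general idea, agglomerative hierarchical clustering over combined space–time distances, using SciPy; the time-scaling weight alpha is an assumption for illustration, not the paper’s parameterization:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Illustrative tracking points: columns are (x, y, t).
rng = np.random.default_rng(0)
points = rng.random((100, 3))

# alpha converts time units into distance units so that space and time
# are commensurable in one distance measure.
alpha = 0.5
features = points * np.array([1.0, 1.0, alpha])

# Ward linkage over the combined space-time distances yields a dendrogram,
# i.e., the nested (hierarchical) cluster structure described above.
Z = linkage(pdist(features), method="ward")
labels = fcluster(Z, t=5, criterion="maxclust")  # cut the tree into 5 clusters
```

Cutting the dendrogram at different heights gives the multi-scale, nested groupings the abstract refers to.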


Author(s):  
Kim R. Hammel ◽  
Donald L. Fisher ◽  
Anuj K. Pradhan

Driving simulators and eye tracking technology are increasingly being used to evaluate advanced telematics. Many such evaluations are easily generalizable only if drivers’ scanning in the virtual environment is similar to their scanning behavior in real-world environments. In this study, we developed a virtual driving environment designed to replicate the environmental conditions of a previous, real-world experiment (Recarte & Nunes, 2000). Our motive was to compare data collected under three different cognitive loading conditions in an advanced, fixed-base driving simulator with data collected in the real world. In the study we report, a head-mounted eye tracker recorded eye movement data while participants drove the virtual highway in half-mile segments under three loading conditions: no loading, verbal loading, and spatial loading. Each of the 24 subjects drove in all three conditions. We found that the patterns characterizing eye movement data collected in the simulator were virtually identical to those characterizing eye movement data collected in the real world. In particular, the number of speedometer checks and the functional field of view significantly decreased in the verbal conditions, with even greater effects in the spatial loading conditions.


2018 ◽  
Author(s):  
Adam P. Morris ◽  
Bart Krekelberg

Summary: Humans and other primates rely on eye movements to explore visual scenes and to track moving objects. As a result, the image that is projected onto the retina, and propagated throughout the visual cortical hierarchy, is almost constantly changing and makes little sense without taking into account the momentary direction of gaze. How is this achieved in the visual system? Here we show that in primary visual cortex (V1), the earliest stage of cortical vision, neural representations carry an embedded “eye tracker” that signals the direction of gaze associated with each image. Using chronically implanted multi-electrode arrays, we recorded the activity of neurons in V1 during tasks requiring fast (exploratory) and slow (pursuit) eye movements. Neurons were stimulated with flickering, full-field luminance noise at all times. As in previous studies [1-4], we observed neurons that were sensitive to gaze direction during fixation, despite comparable stimulation of their receptive fields. We trained a decoder to translate neural activity into metric estimates of (stationary) gaze direction. This decoded signal not only tracked the eye accurately during fixation, but also during fast and slow eye movements, even though the decoder had not been exposed to data from these behavioural states. Moreover, this signal lagged the real eye by approximately the time it takes new visual information to travel from the retina to cortex. Using simulations, we show that this V1 eye position signal could be used to take into account the sensory consequences of eye movements and to map the fleeting positions of objects on the retina onto their stable positions in the world.
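
At its core, the decoder described is a mapping from a vector of neural responses to a two-dimensional gaze estimate. A hedged sketch of that idea using ridge regression follows; the study’s actual decoder details are not specified here, and the data are random stand-ins:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative stand-ins: spike counts (trials x neurons) recorded during
# fixation, and the corresponding gaze directions (trials x 2, degrees).
rng = np.random.default_rng(0)
spikes = rng.poisson(5.0, size=(500, 96)).astype(float)
gaze = rng.uniform(-10, 10, size=(500, 2))

# Train the decoder on fixation data only ...
decoder = Ridge(alpha=1.0).fit(spikes[:400], gaze[:400])

# ... then read out gaze from held-out activity, analogous to decoding
# during fast or slow eye movements the decoder never saw in training.
gaze_hat = decoder.predict(spikes[400:])
```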


Information ◽  
2019 ◽  
Vol 10 (5) ◽  
pp. 170
Author(s):  
Jian Lv ◽  
Xiaoping Xu ◽  
Ning Ding

To address the problem of objectively obtaining the threshold of a user’s cognitive load in a virtual reality interactive system, a method for quantifying user cognitive load based on an eye movement experiment is proposed. Eye movement data were collected during the virtual reality interaction process using an eye tracker. Taking the number of fixation points, the average fixation duration, the average saccade length, and the number of fixation points before the first mouse click as independent variables, and the number of backward-looking events and the user cognitive load value as dependent variables, a cognitive load evaluation model was established based on a probabilistic neural network. The model was validated using eye movement data and subjective cognitive load data. The results show that the absolute error and relative mean square error were 6.52%–16.01% and 6.64%–23.21%, respectively; the model is therefore feasible.
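
A probabilistic neural network is essentially a Gaussian-kernel estimator; for a continuous load value, the closely related generalized regression (GRNN) form is the natural variant. The sketch below shows that form under stated assumptions; the feature values and load scores are invented for illustration:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.8):
    """GRNN / Nadaraya-Watson estimate: Gaussian-kernel weighted average
    of the training targets; sigma is the smoothing parameter."""
    X_train = np.asarray(X_train, float)
    preds = []
    for q in np.asarray(X_query, float):
        d2 = np.sum((X_train - q) ** 2, axis=1)
        w = np.exp(-d2 / (2 * sigma ** 2))
        preds.append(np.dot(w, y_train) / w.sum())
    return np.array(preds)

# Features per trial: fixation count, mean fixation duration (s),
# mean saccade length (deg), fixations before first click (illustrative).
X = np.array([[120, 0.25, 4.2, 8], [90, 0.31, 3.5, 5], [150, 0.22, 5.0, 11]])
y = np.array([0.42, 0.31, 0.58])  # subjective cognitive load scores

# Standardize so no single feature scale dominates the kernel distance.
mu, sd = X.mean(axis=0), X.std(axis=0)
query = (np.array([[110, 0.27, 4.0, 7]]) - mu) / sd
print(grnn_predict((X - mu) / sd, y, query))
```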


2020 ◽  
Vol 2020 (9) ◽  
pp. 39-1-39-7
Author(s):  
Mingming Wang ◽  
Anjali Jogeshwar ◽  
Gabriel J. Diaz ◽  
Jeff B. Pelz ◽  
Susan Farnand

A virtual reality (VR) driving simulation platform has been built for use in addressing multiple research interests. The platform is a VR 3D engine (Unity) that provides an immersive driving experience viewed in an HTC Vive head-mounted display (HMD). To test the platform, we designed a virtual driving scenario based on a real tunnel used by Törnros for on-road tests [1]. Data from the platform, including driving speed and lateral lane position, were compared with the published on-road tests, and the correspondence between the driving simulation and the on-road tests was assessed to demonstrate the platform’s ability as a research tool. In addition, the drivers’ eye movement data, such as the 3D gaze point of regard (POR), will be collected during the test with a Tobii eye tracker integrated in the HMD. This data set will be analyzed offline and examined for correlations with driving behaviors in a future study.
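
Correspondence between simulator and on-road measures of this kind is often summarized with simple paired statistics. A hypothetical sketch follows; the numbers are placeholders, not Törnros’s data:

```python
import numpy as np
from scipy import stats

# Placeholder per-zone means: driving speed (km/h) in matched tunnel zones.
sim_speed = np.array([68.1, 65.4, 63.9, 66.2, 70.3])
road_speed = np.array([66.5, 64.0, 62.8, 65.1, 69.0])

r, p_r = stats.pearsonr(sim_speed, road_speed)    # pattern agreement
t, p_t = stats.ttest_rel(sim_speed, road_speed)   # mean offset between settings
print(f"r = {r:.2f} (p = {p_r:.3f}); paired t = {t:.2f} (p = {p_t:.3f})")
```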


Author(s):  
Liang Sun ◽  
Hua Shao ◽  
Shuyang Li ◽  
Xiaoxun Huang ◽  
Wenyan Yang

Beauty estimation is a common method for landscape quality estimation, although it has some limitations. With an eye tracker, the visual behavior of subjects during the estimation can be recorded. Through analyses of heat maps, path maps, and eye movement data, the psychological responses of the subjects and the underlying rules of aesthetic judgment can be understood, supplementing beauty estimation. This paper studied the beauty estimation of urban waterfront parks and demonstrated that a landscape quality estimation method centered on beauty estimation and assisted by eye movement tracking is feasible. It can improve the objectivity and accuracy of landscape quality estimation to some extent and provide a comprehensive understanding of the effects and combination rules of characteristic landscape elements.
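
Gaze heat maps of the kind mentioned are commonly built by accumulating duration-weighted fixations on a grid and smoothing with a Gaussian kernel. A minimal sketch, with illustrative image dimensions and kernel width:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_heatmap(fx, fy, durations, width=1920, height=1080, sigma=40):
    """Accumulate duration-weighted fixations into a pixel grid, then blur.

    fx, fy    : fixation coordinates in image pixels
    durations : fixation durations (used as weights)
    """
    grid = np.zeros((height, width))
    for x, y, d in zip(fx, fy, durations):
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < width and 0 <= yi < height:
            grid[yi, xi] += d
    heat = gaussian_filter(grid, sigma=sigma)
    return heat / heat.max() if heat.max() > 0 else heat
```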


2019 ◽  
Vol 12 (6) ◽  
Author(s):  
Eva Krueger ◽  
Andrea Schneider ◽  
Ben Sawyer ◽  
Alain Chavaillaz ◽  
Andreas Sonderegger ◽  
...  

Understanding our visual world requires both looking and seeing. Dissociation of these processes can result in the phenomenon of inattentional blindness, or ‘looking without seeing’. Concomitant errors in applied settings can be serious, and even deadly. Current visual data analysis cannot differentiate between merely ‘looking’ and actual processing of visual information, i.e., ‘seeing’. Differentiation may be possible through the examination of microsaccades: the involuntary, small-magnitude saccadic eye movements that occur during processed visual fixation. Recent work has suggested that microsaccades are post-attentional biosignals, potentially modulated by task. Specifically, microsaccade rates decrease with increased mental task demand and increase with growing visual task difficulty. Such findings imply that there are fundamental differences in microsaccadic activity between visual and nonvisual tasks. To evaluate this proposition, we used a high-speed eye tracker to record participants while they looked for differences between two images, did mental arithmetic, or performed both tasks in combination. Results showed that the microsaccade rate significantly increased in conditions requiring high visual attention and decreased in conditions requiring less visual attention. The results support the microsaccade rate as a reflection of visual attention and of the level of visual information processing. A measure that reflects to what extent, and how, an operator is processing visual information represents a critical step toward applying sophisticated visual assessment to real-world tasks.
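
Microsaccades are commonly detected with the velocity-threshold approach of Engbert and Kliegl (2003): flag gaze-velocity excursions beyond a multiple of a median-based velocity spread. The sketch below names that standard method explicitly; the study’s exact pipeline may differ:

```python
import numpy as np

def detect_microsaccades(x, y, fs, lam=6.0, min_samples=3):
    """Engbert-Kliegl-style detection: an elliptic velocity threshold set at
    lam times a median-based estimate of the velocity standard deviation."""
    # Central-difference velocities (units of x,y per second).
    vx = np.gradient(np.asarray(x, float)) * fs
    vy = np.gradient(np.asarray(y, float)) * fs
    # Robust (median-based) spread per axis.
    sx = np.sqrt(np.median(vx ** 2) - np.median(vx) ** 2)
    sy = np.sqrt(np.median(vy ** 2) - np.median(vy) ** 2)
    # Elliptic threshold test per sample.
    above = (vx / (lam * sx)) ** 2 + (vy / (lam * sy)) ** 2 > 1
    # Keep runs of consecutive supra-threshold samples that are long enough.
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                events.append((start, i - 1))
            start = None
    if start is not None and len(above) - start >= min_samples:
        events.append((start, len(above) - 1))
    return events
```

The microsaccade rate per condition is then simply the event count divided by viewing time.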


2021 ◽  
Vol 14 (3) ◽  
Author(s):  
Daria Ivanchenko ◽  
Katharina Rifai ◽  
Ziad M. Hafed ◽  
Frank Schaeffel

We describe a high-performance, pupil-based binocular eye tracker that approaches the performance of a well-established commercial system at a fraction of the cost. The eye tracker is built from standard hardware components, and its software (written in Visual C++) can be easily implemented. Because of its fast and simple linear calibration scheme, the eye tracker performs best in the central 10 degrees of the visual field. The eye tracker possesses a number of useful features: (1) automated calibration in both eyes simultaneously while subjects fixate four fixation points sequentially on a computer screen, (2) automated real-time continuous analysis of measurement noise, (3) automated blink detection, and (4) real-time analysis of pupil centration artifacts. This last feature is critical because pupil diameter changes are known to be erroneously registered by pupil-based trackers as changes in eye position. We evaluated the performance of our system against that of a well-established commercial system using simultaneous measurements in 10 participants. We propose our low-cost eye tracker as a promising resource for studies of binocular eye movements.
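
A fast linear calibration of the kind described maps raw pupil-center coordinates to screen coordinates with an affine least-squares fit over the four fixation targets. The sketch below is an assumed form of such a scheme, not the authors’ Visual C++ implementation; the coordinates are invented:

```python
import numpy as np

def fit_affine_calibration(pupil_xy, target_xy):
    """Least-squares affine map from pupil-center coordinates (N x 2)
    to screen coordinates (N x 2); needs N >= 3, here the four targets."""
    A = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])  # rows [x, y, 1]
    coef, *_ = np.linalg.lstsq(A, target_xy, rcond=None)    # 3 x 2 matrix
    return coef

def apply_calibration(coef, pupil_xy):
    A = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])
    return A @ coef

# Four calibration targets (screen px) and measured pupil centers (camera px).
targets = np.array([[200, 150], [1720, 150], [200, 930], [1720, 930]], float)
pupil = np.array([[310, 240], [390, 238], [312, 300], [392, 302]], float)
coef = fit_affine_calibration(pupil, targets)
gaze_px = apply_calibration(coef, pupil)  # should closely recover the targets
```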

