Effectiveness Analysis on Human-Machine Information Interaction of Intelligent Highway

2022 ◽  
Vol 2022 ◽  
pp. 1-9
Author(s):  
Yicheng Zhou ◽  
Tuo Sun ◽  
Shunzhi Wen ◽  
Hao Zhong ◽  
Youkai Cui ◽  
...  

Human-machine collaboration modes and driving simulation tests are designed with an orthogonal experimental method for a series of typical intelligent highway scenarios. Driver feedback under the different interaction modes is evaluated using the NASA-TLX questionnaire, a driving simulator, an eye tracker, and an electroencephalograph (EEG). The optimal interaction mode (voice form, broadcast timing, and broadcast frequency) for each driving assistance scene in a Cooperative Vehicle Infrastructure (CVI) environment is determined from both subjective and objective perspectives under high- and low-traffic conditions. According to the subjects' feedback on each scene, the structure of the voice information has the strongest effect on drivers, followed by broadcast timing and frequency. Broadcasts with good effects include the various curve assistance scenes paired with a long-distance early warning timing and a high warning frequency; for the exit-tip assistance scene, voice assistance is preferred, while for the speed assistance scenes a beep is more effective. Furthermore, under high traffic a short-distance early warning timing is generally favored across scenes, whereas under low traffic a long-distance early warning timing performs better.
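For reference, the NASA-TLX score mentioned above is conventionally a weighted average of six subscale ratings, with weights taken from 15 pairwise comparisons. The sketch below shows that computation; the subscale values and weights are illustrative, not data from the study.

```python
# Minimal sketch: NASA-TLX weighted workload score.
# Ratings are on a 0-100 scale; weights come from 15 pairwise
# comparisons, so they sum to 15. All values here are illustrative.

SUBSCALES = ["mental", "physical", "temporal",
             "performance", "effort", "frustration"]

def tlx_score(ratings: dict, weights: dict) -> float:
    """Weighted NASA-TLX workload: sum(rating * weight) / 15."""
    assert sum(weights.values()) == 15, "weights must sum to 15"
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 35}
weights = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 2}

print(f"TLX workload: {tlx_score(ratings, weights):.1f}")  # 53.0
```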

Author(s):  
Colleen Serafin ◽  
Cathy Wen ◽  
Gretchen Paelke ◽  
Paul Green

This paper describes an experiment that examined the effect of car phone design on simulated driving and dialing performance. The results were used to help develop an easy-to-use car phone interface and to provide task times as input for a human performance model. Twelve drivers (six under 35 years, six over 60 years) participated in a laboratory experiment in which they operated a simple driving simulator and used a car phone. The phone was either manually dialed or voice-operated, and the associated display was either mounted on the instrument panel (IP) or presented on a simulated head-up display (HUD). The phone numbers dialed were either local (7 digits) or long distance (11 digits), and could be familiar (memorized before the experiment) or unfamiliar to the subject. Four tasks were performed after dialing a phone number; two were fairly ordinary (listening, talking) and two required some mental processing (loose ends, listing). In terms of driving performance, dialing while driving resulted in greater lane deviation (16.8 cm) than performing a task while driving (13.2 cm). In addition, the voice-operated phone resulted in better driving performance (14.5 cm) than the manual phone (15.5 cm) with either the IP display or the HUD. In terms of dialing performance, older drivers dialed 11-digit numbers faster using the voice phone (12.8 seconds) than the manual phone (19.6 seconds). Dialing performance was also affected by the familiarity of numbers: dialing unfamiliar numbers with the voice phone was faster (9.7 seconds) than with the manual phone (13.0 seconds), and 7-digit unfamiliar numbers were dialed faster (8.2 seconds) than 11-digit unfamiliar numbers (14.5 seconds). Thus, the voice-operated design appears to be an effective way of improving the safety and performance of car phone use, whereas the location of the display appears to matter little.
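The lane-deviation figures quoted above are averages of lateral position error. As an illustration of how such a metric can be derived from simulator logs, here is a minimal sketch; the sampling setup, column meanings, and noise levels are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

# Minimal sketch: mean absolute lane deviation from simulator logs.
# `lateral_pos_cm` is the vehicle's lateral position relative to the
# lane center, sampled at a fixed rate; names are illustrative.

def mean_lane_deviation(lateral_pos_cm: np.ndarray) -> float:
    """Mean absolute deviation from lane center, in cm."""
    return float(np.mean(np.abs(lateral_pos_cm)))

# Example: compare a dialing segment with a conversation-task segment.
rng = np.random.default_rng(0)
dialing = rng.normal(0.0, 21.0, size=600)   # noisier steering while dialing
talking = rng.normal(0.0, 16.5, size=600)

print(f"dialing: {mean_lane_deviation(dialing):.1f} cm")
print(f"talking: {mean_lane_deviation(talking):.1f} cm")
```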


2020 ◽  
Author(s):  
Tyron Louw ◽  
Rafael Goncalves ◽  
Guilhermina Torrao ◽  
Vishnu Radhakrishnan ◽  
Wei Lyu ◽  
...  

There is evidence that drivers’ behaviour adapts after using different advanced driving assistance systems. For instance, drivers’ headway during car-following reduces after using adaptive cruise control. However, little is known about whether, and how, drivers’ behaviour will change if they experience automated car-following, and how this is affected by engagement in non-driving related tasks (NDRT). The aim of this driving simulator study, conducted as part of the H2020 L3Pilot project, was to address this topic. We also investigated the effect of the presence of a lead vehicle during the resumption of control on subsequent manual driving behaviour. Thirty-two participants were divided into two experimental groups. During automated car-following, one group was engaged in an NDRT (SAE Level 3), while the other group was free to look around the road environment (SAE Level 2). Both groups were exposed to Long (1.5 s) and Short (0.5 s) Time Headway (THW) conditions during automated car-following, and resumed control both with and without a lead vehicle. All post-automation manual drives were compared to a Baseline Manual Drive recorded at the start of the experiment. Drivers in both groups significantly reduced their time headway in all post-automation drives relative to the Baseline Manual Drive. The reduction in THW was greater after drivers resumed control in the presence of a lead vehicle, and also after they had experienced a shorter THW during automated car-following. However, whether drivers were in L2 or L3 did not appear to influence the change in mean THW. Subjective feedback suggests that drivers were not aware of the changes to their driving behaviour, but preferred longer THWs in automation. Our results suggest that automated driving systems should adopt longer THWs in car-following situations, since drivers’ behavioural adaptation may lead to the adoption of unsafe headways after resumption of control.
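THW, the measure at the centre of this study, is the bumper-to-bumper gap divided by the following vehicle's speed. A minimal sketch of that computation and of a paired baseline comparison follows; the per-participant means are invented for illustration.

```python
import numpy as np
from scipy import stats

# Minimal sketch: time headway (THW) and a paired baseline comparison.
# THW = gap to lead vehicle (m) / own speed (m/s); data are illustrative.

def time_headway(gap_m: np.ndarray, speed_ms: np.ndarray) -> np.ndarray:
    """Per-sample THW in seconds, guarding against near-zero speeds."""
    return gap_m / np.clip(speed_ms, 0.1, None)

# Per-participant mean THW (s) in baseline vs. post-automation drives.
baseline  = np.array([1.9, 2.1, 1.7, 2.3, 1.8, 2.0])
post_auto = np.array([1.5, 1.8, 1.3, 1.9, 1.4, 1.6])

t, p = stats.ttest_rel(baseline, post_auto)
print(f"mean change: {np.mean(post_auto - baseline):+.2f} s (p = {p:.3f})")
```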


Author(s):  
Yalda Ebadi ◽  
Ganesh Pai ◽  
Siby Samuel ◽  
Donald L. Fisher

Vehicle–bicycle collisions are increasing at an alarming rate. A recent study shows that cognitively distracted drivers who are glancing at the forward roadway are also less likely to glance toward areas of potential vehicle–bicyclist conflict. However, that study did not determine whether cognitively distracted drivers who did glance toward the appropriate area were as likely to process the information as drivers who were not cognitively distracted. Evidence that drivers who were cognitively distracted and glanced toward the bicyclist were less likely to process the information could be inferred either from shorter fixations in the area where a bicyclist could appear or from smaller reductions in vehicle speed to mitigate a potential conflict. The present study adds to those results by examining only the glance and vehicle behaviors of participants who glanced toward the latent hazardous events involving bicyclists. Specifically, the durations of glances toward the latent hazardous events by participants who were and were not cognitively distracted are compared, as well as their velocity while approaching the potential strike zones. Two groups of 20 participants (one distracted, one not distracted) each drove through seven scenarios on a fixed-base driving simulator while their eye movements were continuously tracked with an eye tracker. Analysis of the participants’ longest glance duration toward the latent hazardous events indicated that distracted drivers made shorter glances toward the latent hazardous events than their non-distracted counterparts. However, there was no difference in vehicle velocity between distracted and non-distracted drivers near the potential strike zones.
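The dependent measure above, longest glance duration toward a latent hazard, can be extracted from fixation logs by finding the longest run of consecutive fixations inside the hazard's area of interest (AOI). A minimal sketch follows; the column names and AOI labels are illustrative assumptions, not the study's coding scheme.

```python
import pandas as pd

# Minimal sketch: longest glance duration toward an AOI around a
# latent hazard. A "glance" is a run of consecutive fixations whose
# AOI label matches; column names are illustrative.

def longest_glance_ms(fix: pd.DataFrame, aoi: str) -> float:
    """Longest uninterrupted glance (ms) at `aoi` in a fixation log."""
    in_aoi = fix["aoi"] == aoi
    # Label contiguous runs: a new run starts whenever the AOI changes.
    run_id = (in_aoi != in_aoi.shift()).cumsum()
    runs = fix.loc[in_aoi, "duration_ms"].groupby(run_id[in_aoi]).sum()
    return float(runs.max()) if not runs.empty else 0.0

fix = pd.DataFrame({
    "aoi": ["road", "hazard", "hazard", "road", "hazard"],
    "duration_ms": [400, 250, 180, 300, 220],
})
print(longest_glance_ms(fix, "hazard"))  # 430.0
```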


Author(s):  
Donald L. Fisher ◽  
Anuj K. Pradhan ◽  
Alexander Pollatsek ◽  
Michael A. Knodler

2015 ◽  
Vol 53 (1) ◽  
pp. 147-179
Author(s):  
NURIT MELNIK

This paper focuses on the interaction between raising, subject–verb inversion and agreement in Modern Hebrew. It identifies, alongside ‘standard’ (i.e., English-like) subject-to-subject raising, two additional patterns where the embedded subject appears post-verbally. In one, the raising predicate exhibits long-distance agreement with the embedded subject, while in the other, a colloquial variant, it is marked with impersonal (3sm) agreement. The choice between the three raising constructions in the language is shown to be solely dependent on properties of the embedded clause. The data are discussed and analyzed against a background of typological and theoretical work on raising. The analysis, cast in the framework of Head-driven Phrase Structure Grammar (HPSG), builds on research on raising, selectional locality, agreement, subjecthood and information structure, as well as verb-initial constructions in Modern Hebrew.


Author(s):  
Yalda Ebadi ◽  
Ganesh Pai Mangalore ◽  
Siby Samuel

Overall, the rate of vehicle–bicycle collisions is continually increasing. In the United States alone, bicyclist fatalities accounted for 2.3 percent of all crash-related fatalities in 2015. In most of these cases, crashes occur because distracted drivers fail to correctly anticipate bicyclists at hazardous locations on the roadway, such as intersections and curves. The objective of the current study is to contribute to the divided literature surrounding cell phone use while driving by specifically measuring the effects of a secondary mock cell phone task on hazard anticipation performance across common vehicle–bicycle conflict situations. Two groups of 20 drivers each navigated seven unique scenarios on a driving simulator while being monitored by an eye tracker. One group of participants performed a hands-free mock cell phone task while driving, while the second group drove without any tasks beyond the primary task of driving. Analysis of the proportion of anticipatory glances using a logistic regression model revealed a significant main effect of the mock cell phone task, which reduced the proportion of such glances made by drivers toward potential bicyclist threats on the roadway.
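The analysis named above, a logistic regression on whether an anticipatory glance occurred, can be sketched as follows; the data frame, column names, and values are invented for illustration and are not the study's dataset.

```python
import math

import pandas as pd
import statsmodels.formula.api as smf

# Minimal sketch: logistic regression on anticipatory glances.
# Each row is one scenario drive; `glanced` is 1 if the driver made an
# anticipatory glance toward the potential bicyclist threat, and
# `phone_task` is 1 for the mock cell phone group. Data are invented.

df = pd.DataFrame({
    "glanced":    [1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "phone_task": [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
})

model = smf.logit("glanced ~ phone_task", data=df).fit(disp=False)
print(model.summary())
# Odds ratio for the phone task's effect on glance likelihood:
print("odds ratio:", math.exp(model.params["phone_task"]))
```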


Author(s):  
Kim R. Hammel ◽  
Donald L. Fisher ◽  
Anuj K. Pradhan

Driving simulators and eye tracking technology are increasingly being used to evaluate advanced telematics. Many such evaluations are easily generalizable only if drivers' scanning in the virtual environment is similar to their scanning behavior in real-world environments. In this study we developed a virtual driving environment designed to replicate the environmental conditions of a previous, real-world experiment (Recarte & Nunes, 2000). Our motive was to compare the data collected under three different cognitive loading conditions in an advanced, fixed-base driving simulator with that collected in the real world. In the study that we report, a head-mounted eye tracker recorded eye movement data while participants drove the virtual highway in half-mile segments. There were three loading conditions: no loading, verbal loading, and spatial loading. Each of the 24 subjects drove in all three conditions. We found that the patterns that characterized eye movement data collected in the simulator were virtually identical to those that characterized eye movement data collected in the real world. In particular, the number of speedometer checks and the functional field of view decreased significantly in the verbal loading conditions, with even greater effects in the spatial loading conditions.
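Two of the measures above, speedometer checks and the functional field of view, can be computed from AOI-labeled fixation data. The sketch below uses one common operationalization (glance transitions into the speedometer AOI, and an ellipse spanned by the standard deviations of fixation position); both the operationalization and the data are assumptions for illustration, not necessarily the authors' definitions.

```python
import numpy as np

# Minimal sketch: two gaze metrics used in studies like the one above.
# `gaze_x`, `gaze_y` are fixation centers in degrees of visual angle
# relative to the road center; AOI labels are illustrative.

def functional_field_of_view(gaze_x, gaze_y) -> float:
    """Proxy for the functional field of view: area (deg^2) of the
    ellipse spanned by +/-1 SD of horizontal and vertical spread."""
    return float(np.pi * np.std(gaze_x) * np.std(gaze_y))

def speedometer_checks(aoi_labels) -> int:
    """Count glances to the speedometer: transitions into that AOI."""
    checks, prev = 0, None
    for label in aoi_labels:
        if label == "speedometer" and prev != "speedometer":
            checks += 1
        prev = label
    return checks

rng = np.random.default_rng(1)
print(functional_field_of_view(rng.normal(0, 4, 500), rng.normal(0, 2, 500)))
print(speedometer_checks(["road", "speedometer", "road", "mirror",
                          "speedometer", "speedometer", "road"]))  # 2
```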


Electronics ◽  
2021 ◽  
Vol 10 (19) ◽  
pp. 2405
Author(s):  
Heung-Gu Lee ◽  
Dong-Hyun Kang ◽  
Deok-Hwan Kim

Currently, existing vehicle-centric semi-autonomous driving modules do not consider the driver’s situation and emotions. In an autonomous driving environment, when switching to manual driving, a human–machine interface and advanced driver assistance systems (ADAS) are essential to assist vehicle driving. This study proposes a human–machine interface that considers the driver’s situation and emotions to enhance the ADAS. A 1D convolutional neural network (CNN) model based on multimodal bio-signals is applied to control semi-autonomous vehicles. The feasibility of semi-autonomous driving is confirmed by classifying four driving scenarios and controlling the speed of the vehicle. In experiments using a driving simulator and hardware-in-the-loop simulation equipment, we confirm that the response speed of the driving assistance system is 351.75 ms and that the system recognizes four scenarios and eight emotions from bio-signal data.
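As an illustration of the kind of model described above, here is a minimal 1D CNN classifier over windows of multimodal bio-signals; the channel count, window length, layer sizes, and class count are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch: 1D CNN over multimodal bio-signal windows.
# Input shape (batch, channels, samples), e.g. four signal channels
# such as ECG/EDA/EEG/respiration; all sizes are illustrative.

class BioSignalCNN(nn.Module):
    def __init__(self, in_channels: int = 4, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).squeeze(-1))

model = BioSignalCNN()
window = torch.randn(8, 4, 512)   # 8 windows, 4 signals, 512 samples
print(model(window).shape)        # torch.Size([8, 4])
```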

