Evaluation of Multimodal External Human–Machine Interface for Driverless Vehicles in Virtual Reality

Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 687
Author(s):  
Jinzhen Dou ◽  
Shanguang Chen ◽  
Zhi Tang ◽  
Chang Xu ◽  
Chengqi Xue

With the development and promotion of driverless technology, researchers are focusing on designing varied types of external interfaces to induce road users' trust in this new technology. In this paper, we investigated the effectiveness of a multimodal external human–machine interface (eHMI) for driverless vehicles in a virtual environment, focusing on a two-way road scenario. Three phases (identifying, decelerating, and parking) were considered in the driverless-vehicle-to-pedestrian interaction process. Twelve eHMIs were proposed, combining three visual features (smile, arrow, and none), three audible features (human voice, warning sound, and none), and two physical features (yielding and not yielding). We conducted a study to identify a more efficient and safer eHMI for driverless vehicles interacting with pedestrians. Based on the study outcomes, in the case of yielding, interaction efficiency and pedestrian safety in the multimodal eHMI designs were satisfactory compared with the single-modal system. The visual modality in the eHMI of driverless vehicles had the greatest impact on pedestrian safety. In addition, the “arrow” was more intuitive to identify than the “smile” in terms of visual modality.

2021 ◽  
Vol 5 (11) ◽  
pp. 69
Author(s):  
Jana Fank ◽  
Christian Knies ◽  
Frank Diermeyer

Cooperation between road users based on V2X communication has the potential to make road traffic safer and more efficient. The exchange of information enables the cooperative orchestration of critical traffic situations, such as truck overtaking maneuvers on freeways. With the benefit of such a system, questions arise concerning system failure or the abrupt and unexpected behavior of road users. A human–machine interface (HMI) organizes and negotiates the cooperation between drivers and maintains smooth interaction, trust, and system acceptance, even in the case of a possible system failure. A study was conducted with 30 truck drivers on a dynamic truck driving simulator to analyze the negotiation of cooperation requests and the reaction of truck drivers to potential system failures. The results show that an automated cooperation request does not translate into a significantly higher cooperation success rate. System failures in cooperative truck overtaking maneuvers are not considered critical by truck drivers in this simulated environment. The next steps in the development process are to investigate how the success rate of truck overtaking maneuvers on freeways can be further increased and to implement the system in a real vehicle in order to study truck drivers' reactions to system failures in a real environment.


Information ◽  
2020 ◽  
Vol 11 (7) ◽  
pp. 346
Author(s):  
Michael Rettenmaier ◽  
Jonas Schulze ◽  
Klaus Bengler

The communication of an automated vehicle (AV) with human road users can be realized by means of an external human–machine interface (eHMI), such as displays mounted on the AV’s surface. For this purpose, the amount of time needed for a human interaction partner to perceive the AV’s message and to act accordingly has to be taken into account. Any message displayed by an AV must satisfy minimum size requirements based on the dynamics of the road traffic and the time required by the human. This paper examines the size requirements of displayed text or symbols for ensuring the legibility of a message. Based on the limitations of available package space in current vehicle models and the ergonomic requirements of the interface design, an eHMI prototype was developed. A study involving 30 participants varied the content type (text and symbols) and content color (white, red, green) in a repeated-measures design. We investigated the influence of content type on the content size needed to ensure legibility from a constant distance. We also analyzed the influence of content type and content color on the human detection range. The results show that, at a fixed distance, text has to be larger than symbols in order to remain legible. Moreover, symbols can be discerned from a greater distance than text. Color showed no consistent effect on the human detection range across content types. In order to ensure the maximum possible detection range among human road users, an AV should display symbols rather than text. Additionally, the symbols could be color-coded for better message comprehension without affecting the human detection range.
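The size requirement discussed above can be illustrated with the standard visual-angle relation used in display ergonomics: content is legible only if it subtends a sufficient angle at the viewer's eye, so the minimum physical height grows with viewing distance. This is an illustrative sketch, not the paper's own method; the 20-arcminute threshold and the 30 m distance below are assumed example values.

```python
import math

def min_content_height(distance_m: float, visual_angle_arcmin: float) -> float:
    """Minimum physical height (in metres) of displayed content that
    subtends the given visual angle at the given viewing distance."""
    angle_rad = math.radians(visual_angle_arcmin / 60.0)
    # Height of a chord subtending the angle at the viewer's eye.
    return 2.0 * distance_m * math.tan(angle_rad / 2.0)

# Example: content that must subtend ~20 arcmin at 30 m viewing distance.
height = min_content_height(30.0, 20.0)
print(f"required height: {height * 100:.1f} cm")  # roughly 17.5 cm
```

Doubling the viewing distance doubles the required height, which is why the detection range of a fixed-size eHMI display is bounded by the content size it can physically accommodate.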


1990 ◽  
Author(s):  
B. Bly ◽  
P. J. Price ◽  
S. Park ◽  
S. Tepper ◽  
E. Jackson ◽  
...  

Author(s):  
Saverio Trotta ◽  
Dave Weber ◽  
Reinhard W. Jungmaier ◽  
Ashutosh Baheti ◽  
Jaime Lien ◽  
...  

Procedia CIRP ◽  
2021 ◽  
Vol 100 ◽  
pp. 488-493
Author(s):  
Florian Beuss ◽  
Frederik Schmatz ◽  
Marten Stepputat ◽  
Fabian Nokodian ◽  
Wilko Fluegge ◽  
...  

Nanoscale ◽  
2021 ◽  
Author(s):  
Qiufan Wang ◽  
Jiaheng Liu ◽  
Guofu Tian ◽  
Daohong Zhang

The rapid development of human–machine interfaces and artificial intelligence depends on flexible and wearable soft devices such as sensors and energy storage systems. One of the key factors for...


2021 ◽  
Vol 13 (8) ◽  
pp. 188
Author(s):  
Marianna Di Gregorio ◽  
Marco Romano ◽  
Monica Sebillo ◽  
Giuliana Vitiello ◽  
Angela Vozella

The use of Unmanned Aerial Systems, commonly called drones, is growing enormously today. Applications that can benefit from fleets of drones and a related human–machine interface are emerging to ensure better performance and reliability. In particular, a fleet of drones can become a valuable tool for monitoring a wide area and transmitting relevant information to the ground control station. We present a human–machine interface for a Ground Control Station used by a team of multiple operators to remotely operate a fleet of drones in a collaborative setting. In such a setting, a major interface design challenge has been to maximize Team Situation Awareness, shifting the focus from the individual operator to the entire group of decision-makers. We were especially interested in testing the hypothesis that shared displays may improve team situation awareness and hence overall performance. The experimental study we present shows no overall difference in performance between shared and non-shared displays. However, in trials in which unexpected events occurred, teams using shared displays maintained good performance, whereas the performance of teams using non-shared displays declined. In particular, in unexpected situations, operators using shared displays were able to safely bring more drones home, maintaining a higher level of team situation awareness.

