Levels of What? Investigating Drivers’ Understanding of Different Levels of Automation in Vehicles

Author(s):  
Fjollë Novakazi ◽  
Mikael Johansson ◽  
Helena Strömberg ◽  
MariAnne Karlsson

Extant levels-of-automation (LoA) taxonomies describe variations in function allocation between the driver and the driving automation system (DAS) from a technical perspective. However, these taxonomies miss important human factors issues, and when design decisions are based on them, the resulting interaction design leaves users confused. The aim of this paper is therefore to describe how users perceive different DASs by eliciting insights from an empirical driving study employing a Wizard-of-Oz approach, in which 20 participants were interviewed after experiencing systems at two different LoAs under real driving conditions. The findings show that participants talked about the DAS by describing relationships and dependencies between three elements: the context (traffic conditions, road types), the vehicle (abilities, limitations, vehicle operations), and the driver (control, attentional demand, interaction with displays and controls, operation of the vehicle), each with associated aspects that indicate what users consider relevant when describing a vehicle with automated systems. Based on these findings, a conceptual model is proposed by which designers can differentiate LoAs from a human-centric perspective and which can aid in the development of design guidelines for driving automation.
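To make the structure of the proposed conceptual model easier to picture, here is a minimal sketch of the three elements and their associated aspects as plain data structures. The class and field names are illustrative assumptions made for this listing, not the authors’ formalisation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Context:
    """Context aspects named in the abstract."""
    traffic_conditions: str = ""   # e.g. "dense", "free-flowing"
    road_type: str = ""            # e.g. "motorway", "urban"

@dataclass
class Vehicle:
    """Vehicle aspects: abilities, limitations, vehicle operations."""
    abilities: List[str] = field(default_factory=list)
    limitations: List[str] = field(default_factory=list)
    operations: List[str] = field(default_factory=list)

@dataclass
class Driver:
    """Driver aspects: control, attentional demand, interaction, operation."""
    has_control: bool = True
    attentional_demand: str = ""          # e.g. "monitoring required"
    display_interactions: List[str] = field(default_factory=list)
    operates_vehicle: bool = True

@dataclass
class PerceivedAutomationLevel:
    """One user-perceived level of automation, described as a relationship
    between the three elements."""
    context: Context
    vehicle: Vehicle
    driver: Driver
```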

2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Fjollë Novakazi ◽  
Mikael Johansson ◽  
Gustav Erhardsson ◽  
Linnéa Lidander

Abstract Fully automated driving still lies far in the future. Vehicles with multiple modes of operation will therefore not disappear, as many road types, traffic conditions, and weather conditions will not allow fully automated driving. Instead, trips will remain fragmented with regard to automation, with drivers having different levels of automation available at different times. Given this scenario and the complexity of vehicles offering multiple levels of automation with different driving modes depending on prevailing conditions, it becomes critical that drivers understand their responsibility during the different modes. The aim of this paper is to contribute to further understanding of how perceived control influences the driver’s mode awareness of, and responsibility for, the driving task by reporting on an on-road Wizard-of-Oz study under real driving conditions. The results show that when confronted with a vehicle offering both a Level 2 and a Level 4 driving automation system, drivers have difficulty determining whether control is allocated to them or to the system. The results further show that perceived control and responsibility for the driving task are closely linked, and that the driver’s perception of the driving system influences how they interact with it. Finally, conclusions are drawn regarding the way perceived control influences mode awareness when interacting with a vehicle that features multiple levels of automation.
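For context on the responsibility split that makes mode confusion consequential, the sketch below contrasts the two levels used in the study as a simple lookup. The wording follows commonly cited SAE J3016 descriptions and is an assumption of this listing, not the paper’s definitions.

```python
# Hypothetical lookup contrasting the two SAE levels featured in the study,
# paraphrasing commonly cited SAE J3016 descriptions (not the paper's wording).
RESPONSIBILITY = {
    "SAE Level 2": {
        "lateral_and_longitudinal_control": "system",
        "monitoring_of_environment": "driver",
        "fallback_when_system_fails": "driver",
        "driver_may_disengage_attention": False,
    },
    "SAE Level 4": {
        "lateral_and_longitudinal_control": "system",
        "monitoring_of_environment": "system",
        "fallback_when_system_fails": "system (within its operational design domain)",
        "driver_may_disengage_attention": True,
    },
}

def who_is_responsible(level: str, aspect: str) -> str:
    """Return which agent holds responsibility for an aspect of the driving task."""
    return str(RESPONSIBILITY[level][aspect])

print(who_is_responsible("SAE Level 2", "monitoring_of_environment"))  # driver
print(who_is_responsible("SAE Level 4", "monitoring_of_environment"))  # system
```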


2017 ◽  
Vol 12 (1) ◽  
pp. 42-49 ◽  
Author(s):  
Greg A. Jamieson ◽  
Gyrd Skraaning

This paper responds to Kaber’s reflections on the empirical grounding and design utility of the levels-of-automation (LOA) framework. We discuss the suitability of the existing human performance data for supporting design decisions in complex work environments. We question why human factors design guidance seems wedded to a model of questionable predictive value. We challenge the belief that LOA frameworks offer useful input to the design and operation of highly automated systems. Finally, we seek to expand the design space for human–automation interaction beyond the familiar human factors constructs. Taken together, our positions paint LOA frameworks as abstractions suffering a crisis of confidence that Kaber’s remedies cannot restore.


Author(s):  
Emilie Roth ◽  
Beth Depass ◽  
Jonathan Harter ◽  
Ronald Scott ◽  
Jeffrey Wampler

There is growing recognition of a need to go beyond levels-of-automation frameworks to provide more detailed guidance for the design of effective human-automation interaction (HAI). Here we present some design questions that are important for designers of HAI to address as they develop the requirements for the software architecture and user interfaces of automated aids. This set of guiding questions has grown out of our experience in developing a series of successful collaborative automation systems for airlift planning and scheduling. We illustrate through examples how answers to these high-level questions helped inform the HAI design decisions we confronted. The set of questions is offered in an attempt to broaden the discussion of how best to provide guidance to system developers confronted with HAI design challenges.


2021 ◽  
Author(s):  
Valerian Chambon

Repeated interactions with automated systems are known to affect how agents experience their own actions and choices. The present study explores the possibility of partially restoring the sense of agency in operators interacting with automated systems by providing additional information about how and why these systems make decisions. To do so, we implemented an obstacle avoidance task with different levels of automation and explicability. Levels of automation were varied by implementing conditions in which the participant was or was not free to choose which direction to take, whereas levels of explicability were varied by providing or not providing the participant with the system’s confidence in the direction to take. We first assessed how automation and explicability interacted with participants’ sense of agency, and then tested whether increased self-agency was systematically associated with greater confidence in the decision and improved system acceptability. The results showed an overall positive effect of system assistance. Providing additional information about the system’s decision (explicability effect) and reducing the cognitive load associated with the decision itself (automation effect) were associated with a stronger sense of agency, greater confidence in the decision, and better performance. In addition to these positive effects of system assistance, acceptability scores revealed that participants perceived “explicable” systems more favorably. These results highlight the potential value of studying self-agency in human-machine interaction as a guideline for making automation technologies more acceptable and, ultimately, improving the usefulness of these technologies.
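As a rough illustration of the 2 × 2 design described above (automation × explicability), the sketch below enumerates the four conditions. The labels and descriptions are hypothetical shorthand for this listing, not the study’s actual condition names.

```python
from itertools import product

# Hypothetical encoding of the 2 x 2 design described in the abstract:
# automation    -> whether the participant is free to choose the avoidance direction
# explicability -> whether the system's confidence in a direction is shown
AUTOMATION = {"low": "participant chooses the direction",
              "high": "system pre-selects the direction"}
EXPLICABILITY = {"opaque": "no system confidence shown",
                 "explicable": "system confidence shown"}

conditions = [
    {"automation": a, "explicability": e,
     "description": f"{AUTOMATION[a]}; {EXPLICABILITY[e]}"}
    for a, e in product(AUTOMATION, EXPLICABILITY)
]

for c in conditions:
    print(f"{c['automation']:4s} / {c['explicability']:10s} -> {c['description']}")
```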


2021 ◽  
Author(s):  
S.E. Bebinov ◽  
O.N. Krivoshchekova ◽  
A.V. Nechaev

The research was carried out on two independent experimental groups of young men and women. The first group was observed under real traffic conditions, the second during driving-simulator training. The following HRV indices were determined: HR (heart rate), IN (index of tension of regulatory systems), AMo (amplitude of the mode), and LF/HF (index of vagosympathetic interaction). A pronounced sympathetic reaction to the training load, followed by recovery of the studied characteristics, was revealed in the better-prepared cadets. Key words: heart rate variability, autonomic regulation, vagosympathetic interaction, driver training, level of preparedness.
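The abstract names the indices without giving formulas. For orientation, here is a minimal Python sketch of how such indices are commonly computed from RR-interval data (Baevsky’s formula for the tension index, a Welch spectrum for LF/HF); the formulas, bin width, and frequency bands are standard-practice assumptions, not taken from the paper.

```python
import numpy as np
from scipy.signal import welch

def hrv_indices(rr_ms, bin_ms=50):
    """Common HRV indices from a series of RR intervals (milliseconds).
    HR: mean heart rate; IN: Baevsky tension (stress) index; AMo: amplitude
    of the mode; LF/HF: ratio of low- to high-frequency spectral power."""
    rr = np.asarray(rr_ms, dtype=float)

    hr = 60000.0 / rr.mean()                           # beats per minute

    # Mode (Mo), amplitude of the mode (AMo), and variation range (MxDMn)
    # from a histogram with roughly 50 ms bins, as in Baevsky's method.
    n_bins = max(1, int(np.ceil(np.ptp(rr) / bin_ms)))
    counts, edges = np.histogram(rr, bins=n_bins)
    k = counts.argmax()
    mo = (edges[k] + edges[k + 1]) / 2.0 / 1000.0      # modal RR interval, s
    amo = 100.0 * counts[k] / rr.size                  # % of intervals in modal bin
    mxdmn = np.ptp(rr) / 1000.0                        # variation range, s

    tension_index = amo / (2.0 * mo * mxdmn)           # IN

    # LF/HF from the Welch spectrum of the evenly resampled RR series.
    t = np.cumsum(rr) / 1000.0
    ti = np.arange(t[0], t[-1], 0.25)                  # resample at 4 Hz
    rri = np.interp(ti, t, rr)
    f, pxx = welch(rri - rri.mean(), fs=4.0, nperseg=min(256, rri.size))
    band = lambda lo, hi: pxx[(f >= lo) & (f < hi)].sum()  # proportional to band power
    lf, hf = band(0.04, 0.15), band(0.15, 0.40)

    return {"HR": hr, "IN": tension_index, "AMo": amo, "LF/HF": lf / hf}
```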


Author(s):  
Neville Moray ◽  
Toshiyuki Inagaki ◽  
Makoto Itoh

Sheridan’s “Levels of Automation” were explored in an experiment on fault management in a continuous process control task that included situation-adaptive automation. Levels of automation with more or less autonomy, and different levels of advice to the operator, were compared under automatic diagnosis of varying reliability. The efficiency of process control and of fault management was examined under human control and under automated fault management, and the aspects of the task in which the human or the automation was the more efficient were identified. The results are related to earlier work by Lee, Moray, and Muir on trust and self-confidence in the allocation of function.
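For readers unfamiliar with the scale referred to here, the sketch below encodes a common paraphrase of Sheridan and Verplank’s ten levels of automation; the wording is a summary assumed for this listing, not quoted from the paper.

```python
from enum import IntEnum

class SheridanLOA(IntEnum):
    """Sheridan & Verplank's ten levels of automation, paraphrased."""
    HUMAN_DOES_ALL = 1            # human does the whole job
    COMPUTER_OFFERS_OPTIONS = 2   # computer offers a full set of alternatives
    COMPUTER_NARROWS_OPTIONS = 3  # computer narrows the selection to a few
    COMPUTER_SUGGESTS_ONE = 4     # computer suggests one alternative
    EXECUTES_IF_APPROVED = 5      # executes the suggestion if the human approves
    EXECUTES_AFTER_VETO_TIME = 6  # allows limited time to veto before acting
    EXECUTES_THEN_INFORMS = 7     # acts automatically, then informs the human
    INFORMS_IF_ASKED = 8          # informs the human only if asked
    INFORMS_IF_IT_DECIDES = 9     # informs the human only if it decides to
    FULLY_AUTONOMOUS = 10         # decides and acts, ignoring the human

print(SheridanLOA.EXECUTES_THEN_INFORMS)  # SheridanLOA.EXECUTES_THEN_INFORMS
```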


Author(s):  
Wyatt McManus ◽  
Jing Chen

Modern surface transportation vehicles often include different levels of automation. Higher automation levels have the potential to impact surface transportation in unforeseen ways. For example, connected vehicles with higher levels of automation are at higher risk for hacking attempts, because automated driving assistance systems often rely on onboard sensors and internet connectivity (Amoozadeh et al., 2015). As the automation level of vehicle control rises, it is necessary to examine the effect that different levels of automation have on driver-vehicle interactions. While research into the effect of automation level on driver-vehicle interactions is growing, research into how automation level affects drivers’ responses to vehicle hacking attempts is very limited. In addition, auditory warnings have been shown to effectively attract a driver’s attention while performing a driving task, which is often visually demanding (Baldwin, 2011; Petermeijer, Doubek, & de Winter, 2017). An auditory warning can be either speech-based, containing semantic information (e.g., “car in blind spot”), or non-semantic (e.g., a tone, or an earcon), and these can influence driver behavior differently (Sabic, Mishler, Chen, & Hu, 2017).

The purpose of the current study was to examine the effect of level of automation and warning type on driver responses to novel critical events, using vehicle hacking attempts as a concrete example, in a driving simulator. The study compared how level of automation (manual vs. automated) and warning type (non-semantic vs. semantic) affected drivers’ responses to a vehicle hacking attempt, using time to collision (TTC) values, maximum steering wheel angle, number of successful responses, and other measures of response. A full factorial between-subjects design with the two factors yielded four conditions (Manual Semantic, Manual Non-Semantic, Automated Semantic, and Automated Non-Semantic). Seventy-two participants recruited through SONA (odupsychology.sona-systems.com) completed two simulated drives to school in a driving simulator. The first drive ended with the participant safely arriving at school. A two-second warning was presented three quarters of the way through the second drive and was immediately followed by a simulated vehicle hacking attempt. The warning either stated “Danger, hacking attempt incoming” in the semantic conditions or was a 500 Hz sine tone in the non-semantic conditions. The hacking attempt lasted five seconds before simulating a crash into a vehicle and ending the simulation if the driver did not intervene.

Our results revealed no significant effect of level of automation or warning type on TTC or successful response rate. However, there was a significant effect of level of automation on maximum steering wheel angle, a measure of response quality (Shen & Neyens, 2017): manual drivers had safer responses to the hacking attempt, with smaller maximum steering wheel angles. In addition, an effect of warning type that approached significance was found for maximum steering wheel angle, such that participants who received a semantic warning had more severe and dangerous responses to the hacking attempt. The TTC and successful response results from the current experiment do not match those in the previous literature; these null results were potentially due to the warning implementation time and the complexity of the vehicle hacking attempt. In contrast, the maximum steering wheel angle results indicated that level of automation and warning type affected the safety and severity of the participants’ responses to the vehicle hacking attempt. This suggests that both factors may influence responses to hacking attempts in some capacity. Further research will be required to determine whether level of automation and warning type affect participants’ ability to safely respond to vehicle hacking attempts.

Acknowledgments. We are grateful to Scott Mishler for his assistance with STISIM programming and to Faye Wakefield, Hannah Smith, and Pettie Perkins for their assistance in data collection.
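As a concrete reference for the two key measures (time to collision and maximum steering wheel angle), here is a minimal Python sketch of how they might be computed from simulator logs; the function names and the sample data are invented for illustration and are not the study’s analysis code.

```python
import numpy as np

def time_to_collision(gap_m, ego_speed_mps, lead_speed_mps):
    """Time to collision: remaining gap divided by closing speed.
    Returns inf when the ego vehicle is not closing on the lead vehicle."""
    closing = ego_speed_mps - lead_speed_mps
    return gap_m / closing if closing > 0 else float("inf")

def response_summary(steering_deg, gap_m, ego_mps, lead_mps):
    """Summarise one simulated hacking-attempt response: minimum TTC and
    maximum absolute steering wheel angle over the event window."""
    ttc = [time_to_collision(g, v, u) for g, v, u in zip(gap_m, ego_mps, lead_mps)]
    return {"min_ttc_s": min(ttc),
            "max_steer_deg": float(np.max(np.abs(steering_deg)))}

# Hypothetical five-second event window sampled at 10 Hz (values invented):
t = np.linspace(0, 5, 51)
summary = response_summary(
    steering_deg=15 * np.sin(t),       # steering wheel trace, degrees
    gap_m=40 - 4 * t,                  # shrinking gap to the lead vehicle, metres
    ego_mps=np.full_like(t, 20.0),     # ego vehicle speed, m/s
    lead_mps=np.full_like(t, 16.0),    # lead vehicle speed, m/s
)
print(summary)  # e.g. {'min_ttc_s': 5.0, 'max_steer_deg': 15.0}
```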

