Situation awareness in remote operators of autonomous vehicles: Developing a taxonomy of situation awareness in video-relays of driving scenes

2021 ◽  
Vol 12 ◽  
Author(s):  
Clare Mutzenich ◽  
Szonya Durant ◽  
Shaun Helman ◽  
Polly Dalton

Even entirely driverless vehicles will sometimes require remote human intervention. Existing SA frameworks do not acknowledge the significant human factors challenges unique to a driver in charge of a vehicle that they are not physically occupying. Remote operators will have to build up a mental model of the remote environment facilitated by monitor view and video feed. We took a novel approach to ‘freeze and probe’ techniques to measure SA, employing a qualitative verbal elicitation task to uncover what people ‘see’ in a remote scene when they are not constrained by rigid questioning. Participants (n = 10) watched eight videos of driving scenes randomised and counterbalanced across four road types (motorway, rural, residential and A road). Participants recorded spoken descriptions when each video stopped, detailing what was happening (comprehension) and what could happen next (prediction). Participant transcripts provided a rich catalogue of verbal data reflecting clear interactions between different SA levels. This suggests that acquiring SA in remote scenes is a flexible and fluctuating process of combining comprehension and prediction globally rather than serially, in contrast to what has sometimes been implied by previous SA methodologies (Jones & Endsley, 1996; Endsley, 2000, 2017). Participants’ responses were also categorised to form a ‘Taxonomy of SA’ aimed at capturing the key elements of people’s reported SA for videos of driving situations. We suggest that existing theories of SA need to be more sensitively applied to remote driving contexts such as remote operators of autonomous vehicles.


2021 ◽  
Vol 11 (21) ◽  
pp. 9799
Author(s):  
Syed Qamar Zulqarnain ◽  
Sanghwan Lee

Autonomous vehicle (AV) technology has improved dramatically in recent years. However, even though AVs require no human intervention in most situations, they may still fail in certain cases. When that happens, it is desirable that a human can operate the vehicle manually through remote driving to recover from the failure. Furthermore, we believe that remote driving can enhance the current transportation system in various ways. In this paper, we consider a revolutionary transportation platform in which all the vehicles in an area are controlled by remote controllers or drivers, so that transportation can be performed more efficiently. For example, centralized remote control can utilize road capacity effectively and increase fuel efficiency. However, one of the biggest challenges in such remote driving is the communication latency between the remote driver and the vehicle. Selecting appropriate locations for the remote drivers is therefore very important to avoid safety problems caused by large communication latency, and the selection should reflect the traffic situation created by multiple vehicles in an area. To tackle these challenges, we propose several algorithms that select remote drivers’ locations for given transportation schedules of multiple vehicles. We consider two objectives for this system and evaluate the performance of the proposed algorithms through simulations. The results show that the proposed algorithms outperform several baseline algorithms.
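The abstract does not give the papers' actual algorithms, but the placement problem it describes can be illustrated with a minimal greedy sketch: pick driver locations from a candidate set so that the worst-case latency to any scheduled waypoint is small. All names and the distance-proportional latency model below are hypothetical assumptions for illustration only.

```python
def latency(driver_loc, waypoint):
    """Hypothetical latency model: proportional to Euclidean distance."""
    dx = driver_loc[0] - waypoint[0]
    dy = driver_loc[1] - waypoint[1]
    return (dx * dx + dy * dy) ** 0.5

def select_driver_locations(candidates, schedules, k):
    """Greedily pick k candidate locations minimising the worst-case
    latency from the nearest chosen driver to any scheduled waypoint."""
    waypoints = [wp for route in schedules for wp in route]
    chosen = []
    for _ in range(k):
        best, best_cost = None, float("inf")
        for c in candidates:
            if c in chosen:
                continue
            trial = chosen + [c]
            # worst-case latency if each waypoint is served by its nearest driver
            cost = max(min(latency(d, wp) for d in trial) for wp in waypoints)
            if cost < best_cost:
                best, best_cost = c, cost
        chosen.append(best)
    return chosen
```

A greedy heuristic like this is only one plausible choice; the paper evaluates several algorithms against two objectives, which the abstract does not specify.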


2021 ◽  
Vol 13 (13) ◽  
pp. 2643
Author(s):  
Dário Pedro ◽  
João P. Matos-Carvalho ◽  
José M. Fonseca ◽  
André Mora

Unmanned Autonomous Vehicles (UAVs), while not a recent invention, have lately acquired a prominent position in many industries: they are increasingly used not only by enthusiasts but also in demanding technical use cases, and they will have a significant societal effect in the coming years. However, the use of UAVs is fraught with significant safety threats, such as collisions with dynamic obstacles (other UAVs, birds, or randomly thrown objects). This research focuses on a safety problem that is often overlooked for lack of technology and solutions to address it: collisions with non-stationary objects. A novel approach is described that employs deep learning techniques to solve the computationally intensive problem of real-time collision avoidance with dynamic objects using off-the-shelf commercial vision sensors. The viability of the suggested approach was corroborated by multiple experiments, first in simulation and afterwards in a concrete real-world case that consists of dodging a thrown ball. A novel video dataset was created and made available for this purpose, and transfer learning was also tested, with positive results.


2021 ◽  
Author(s):  
Khalil Khaska ◽  
Dániel Miletics

Nowadays, self-driving cars enjoy wide and constantly increasing public recognition, and many manufacturers are developing their own autonomous vehicles. These vehicles are equipped with various sensors placed at several points on the car, which provide the information needed to control the vehicle (partially or completely, depending on the automation level). Sight distances on roads are defined for various traffic situations (stopping, overtaking, crossing, etc.). Safety requires these sight distances, which are calculated from human factors (e.g., reaction time), vehicle characteristics (e.g., eye position, brakes), road surface properties, and other factors. Autodesk Civil 3D is a widely used road design tool; the software, however, was developed based on the characteristics of human drivers and conventional vehicles.
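The sight-distance calculation mentioned above can be made concrete with the standard stopping sight distance (SSD) formula, which combines a reaction-time distance with a kinematic braking distance. This is a textbook (AASHTO-style) formula shown for illustration; the paper itself works within Autodesk Civil 3D rather than computing it by hand, and the default parameter values below are the conventional design assumptions, not values from the paper.

```python
def stopping_sight_distance(speed_kmh, reaction_time_s=2.5, decel_ms2=3.4):
    """SSD = reaction distance + braking distance, in metres.

    Defaults follow common design practice: 2.5 s perception-reaction
    time and 3.4 m/s^2 comfortable deceleration.
    """
    v = speed_kmh / 3.6                      # convert km/h to m/s
    reaction_dist = v * reaction_time_s      # distance covered before braking begins
    braking_dist = v * v / (2 * decel_ms2)   # kinematic braking distance v^2 / (2a)
    return reaction_dist + braking_dist
```

At a 100 km/h design speed this yields roughly 183 m, which is why the human-driver assumptions baked into such formulas (and into road design software) matter when the "driver" is an automated system with different reaction characteristics.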


Author(s):  
Trent W. Victor ◽  
Emma Tivesten ◽  
Pär Gustavsson ◽  
Joel Johansson ◽  
Fredrik Sangberg ◽  
...  

Objective: The aim of this study was to understand how to secure driver supervision engagement and conflict intervention performance while using highly reliable (but not perfect) automation. Background: Securing driver engagement—by mitigating irony of automation (i.e., the better the automation, the less attention drivers will pay to traffic and the system, and the less capable they will be to resume control) and by communicating system limitations to avoid mental model misconceptions—is a major challenge in the human factors literature. Method: One hundred six drivers participated in three test-track experiments in which we studied driver intervention response to conflicts after driving highly reliable but supervised automation. After 30 min, a conflict occurred wherein the lead vehicle cut out of lane to reveal a conflict object in the form of either a stationary car or a garbage bag. Results: Supervision reminders effectively maintained drivers’ eyes on path and hands on wheel. However, neither these reminders nor explicit instructions on system limitations and supervision responsibilities prevented 28% (21/76) of drivers from crashing with their eyes on the conflict object (car or bag). Conclusion: The results uncover the important role of expectation mismatches, showing that a key component of driver engagement is cognitive (understanding the need for action), rather than purely visual (looking at the threat), or having hands on wheel. Application: Automation needs to be designed either so that it does not rely on the driver or so that the driver unmistakably understands that it is an assistance system that needs an active driver to lead and share control.


2019 ◽  
Vol 26 (1) ◽  
pp. e100081 ◽  
Author(s):  
Mark Sujan ◽  
Dominic Furniss ◽  
Kath Grundy ◽  
Howard Grundy ◽  
David Nelson ◽  
...  

The use of artificial intelligence (AI) in patient care can offer significant benefits. However, there is a lack of independent evaluation considering AI in use. The paper argues that consideration should be given to how AI will be incorporated into clinical processes and services. Human factors challenges that are likely to arise at this level include cognitive aspects (automation bias and human performance), handover and communication between clinicians and AI systems, situation awareness and the impact on the interaction with patients. Human factors research should accompany the development of AI from the outset.


2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Benjamin Vedder ◽  
Bo Joel Svensson ◽  
Jonny Vinter ◽  
Magnus Jonsson

Autonomous vehicles need accurate and dependable positioning, and these systems need to be tested extensively. We have evaluated positioning based on ultrawideband (UWB) ranging with our self-driving model car using a highly automated approach. Random drivable trajectories were generated, while the UWB position was compared against the Real-Time Kinematic Satellite Navigation (RTK-SN) positioning system which our model car also is equipped with. Fault injection was used to study the fault tolerance of the UWB positioning system. Addressed challenges are automatically generating test cases for real-time hardware, restoring the state between tests, and maintaining safety by preventing collisions. We were able to automatically generate and carry out hundreds of experiments on the model car in real time and rerun them consistently with and without fault injection enabled. Thereby, we demonstrate one novel approach to perform automated testing on complex real-time hardware.
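The test loop described above (run generated trajectories, compare UWB positions against the RTK-SN reference, optionally inject faults) can be sketched minimally as follows. Every function name, the 1-D position abstraction, and the fault model (random packet drops plus a constant bias) are hypothetical placeholders, not the authors' implementation.

```python
import random

def inject_fault(uwb_range, enabled, drop_prob=0.1, bias=0.5):
    """Hypothetical fault injector: randomly drop or bias a UWB measurement."""
    if not enabled:
        return uwb_range
    if random.random() < drop_prob:
        return None            # simulated lost ranging packet
    return uwb_range + bias    # simulated constant ranging bias

def run_experiment(trajectory, uwb_fix, rtk_fix, faults=False, tol=0.3):
    """Drive one trajectory; return the fraction of samples where the
    UWB position stays within `tol` metres of the RTK reference."""
    ok = 0
    for waypoint in trajectory:
        uwb = uwb_fix(waypoint, faults)
        rtk = rtk_fix(waypoint)
        if uwb is not None and abs(uwb - rtk) <= tol:
            ok += 1
    return ok / len(trajectory)
```

Rerunning the same seeded experiment with and without `faults=True` mirrors the paper's idea of repeating hundreds of trials consistently to quantify the positioning system's fault tolerance.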


Author(s):  
Andre Garcia ◽  
Neil Ganey ◽  
Jeff Wilbert

Technology Readiness Levels (TRLs) are a framework, originally created by NASA and later adopted and tailored by the US Department of Defense (Graettinger, Garcia, Siviy, Schenk, & Van Syckle, 2002), for tracking the progress and maturity of a given technology. A number of derivative readiness level frameworks have spun off from the original TRL framework, such as System Readiness Levels, Software Readiness Levels, Integration Readiness Levels, and Manufacturing Readiness Levels, to name a few. Most of these frameworks have an associated readiness assessment used to identify or assess the precise readiness level. Human Readiness Levels (HRLs) are a framework used to identify the readiness or maturity of a given technology with respect to its usability and its refinement for use by humans (Phillips, 2010). There are a number of HRL frameworks and similar ones (e.g., Human Factors Readiness Levels), yet little attention has been paid to Human Readiness Assessments (HRAs). The purpose of this paper is to review the literature on Human Readiness Levels and introduce a new multivariate Human Readiness Assessment that emphasizes workload, situation awareness (SA), and usability.

