Human Autonomy in Future Drone Traffic: Joint Human–AI Control in Temporal Cognitive Work

2021 · Vol 4
Author(s): Jonas Lundberg, Mattias Arvola, Karljohan Lundin Palmerius

The roles of human operators are changing due to the increased intelligence and autonomy of computer systems. Humans will interact with systems at a more overarching level or only in specific situations. This involves learning new practices and changing habitual ways of thinking and acting, including reconsidering human autonomy in relation to autonomous systems. This paper describes a design case of a future autonomous management system for drone traffic in cities, centered on a key scenario we call The Computer in Brussels. Our approach to designing for human collaboration with autonomous systems builds on scenario-based design and cognitive work analysis, facilitated by computer simulations. We use a temporal method, the Joint Control Framework, to describe human and automated work in an abstraction hierarchy labeled Levels of Autonomy in Cognitive Control, and we use the Score notation to analyze patterns of temporal development that span levels of the abstraction hierarchy. We discuss implications for human-automation communication in traffic management, how autonomy at a lower level can prevent autonomy at higher levels and vice versa, and the temporal nature of autonomy in minute-to-minute operative work. Our conclusion is that human autonomy in relation to autonomous systems rests on fundamental trade-offs between technological opportunities to automate and what human actors find meaningful.
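To make the framework's temporal view concrete, below is a minimal sketch of how Score-style timeline events might be recorded against an abstraction hierarchy. The level names, fields, and example events are illustrative inventions, not the paper's actual Levels of Autonomy in Cognitive Control.

```python
from dataclasses import dataclass

# Hypothetical abstraction levels; the paper's LACC hierarchy differs.
LEVELS = ["values", "effects", "generic function", "implementation", "physical"]

@dataclass
class ScoreEvent:
    t_start: float   # seconds into the scenario
    t_end: float
    agent: str       # "human" or "automation"
    level: str       # one of LEVELS
    activity: str    # free-text description

timeline = [
    ScoreEvent(0, 60, "automation", "implementation", "reroute drones around a no-fly zone"),
    ScoreEvent(30, 90, "human", "values", "decide whether medical flights get priority"),
]

def events_at(t: float, events: list) -> dict:
    """Group the events active at time t by agent, to inspect who is
    working at which abstraction level at that moment."""
    active = [e for e in events if e.t_start <= t < e.t_end]
    return {a: [e for e in active if e.agent == a] for a in ("human", "automation")}

print(events_at(45, timeline))
```

Inspecting such a timeline makes visible the paper's point that a low-level automated commitment (the reroute) can foreclose a higher-level human choice (the prioritization) while both are in progress.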

Entropy · 2020 · Vol 22 (11) · pp. 1227
Author(s): William F. Lawless

As humanity grapples with the concept of autonomy for human–machine teams (A-HMTs), the question of how to control autonomy in a way that instills trust remains unresolved. For non-autonomous systems in states with a high degree of certainty, rational approaches exist to solve, model, or control stable interactions, e.g., game theory, scale-free network theory, multi-agent systems, and drone swarms. For example, guided by artificial intelligence (AI, including machine learning, ML) or by human operators, swarms of drones have made spectacular gains in applications too numerous to list (e.g., crop management; mapping, surveillance, and fire-fighting systems; weapon systems). But under states of uncertainty or where conflict exists, rational models fail, exactly where interdependence theory thrives. Large, coupled physical or information systems can also experience synergism or dysergism from interdependence. Synergistically, the best human teams are not only highly interdependent but also exploit interdependence to reduce uncertainty, the focus of this work-in-progress and roadmap. We have long argued that interdependence is fundamental to human autonomy in teams. But for A-HMTs, no mathematics exists in rational theory or social science from which to build their design or to ensure their safe and effective operation, a severe weakness. Compared to rational and traditional social theory, we hope to advance interdependence theory, first, by mapping similarities between quantum theory and our prior findings; e.g., to maintain interdependence, we previously established that boundaries reduce dysergic effects and allow teams to function (akin to blocking interference to prevent quantum decoherence). Second, we extend our prior findings with case studies to predict, with interdependence theory, that as uncertainty increases in non-factorable situations for humans, the duality in two-sided beliefs serves debaters who explore alternatives with trade-offs in the search for the best path forward. Third, applied to autonomous teams, we conclude that a machine in an A-HMT must be able to express itself to its human teammates in causal language, however imperfectly.


Author(s): Mica R. Endsley

As autonomous and semiautonomous systems are developed for automotive, aviation, cyber, robotics, and other applications, the ability of human operators to effectively oversee and interact with them when needed poses a significant challenge. An automation conundrum exists: the more autonomy that is added to a system, and the more its reliability and robustness increase, the lower the situation awareness of human operators and the less likely they are to be able to take over manual control when needed. The human–autonomy systems oversight model integrates several decades of relevant autonomy research on operator situation awareness, out-of-the-loop performance problems, monitoring, and trust, which are all major challenges underlying the automation conundrum. Key design interventions for improving human performance in interacting with autonomous systems are integrated in the model, including human–automation interface features and central automation interaction paradigms comprising levels of automation, adaptive automation, and granularity of control approaches. Recommendations for the design of human–autonomy interfaces are presented and directions for future research discussed.
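As a toy illustration of the levels-of-automation and adaptive-automation paradigms the model integrates, the sketch below adjusts an automation level from rough estimates of operator situation awareness and automation confidence. The four-level scale, thresholds, and policy are invented for illustration and are not Endsley's taxonomy.

```python
from enum import IntEnum

class LOA(IntEnum):
    # Coarse, invented four-level scale for the sketch.
    MANUAL = 1
    DECISION_SUPPORT = 2
    SUPERVISED_AUTONOMY = 3
    FULL_AUTONOMY = 4

def adapt_loa(current: LOA, operator_sa: float, automation_confidence: float) -> LOA:
    """Toy adaptive-automation policy: hand control back only while the
    operator's estimated situation awareness (0..1) is still high enough
    to accept it, avoiding an abrupt out-of-the-loop transfer."""
    if automation_confidence < 0.5 and operator_sa > 0.6:
        return LOA(max(current - 1, LOA.MANUAL))         # shed autonomy gracefully
    if automation_confidence > 0.9 and operator_sa > 0.4:
        return LOA(min(current + 1, LOA.FULL_AUTONOMY))  # take on more
    return current

print(adapt_loa(LOA.SUPERVISED_AUTONOMY, operator_sa=0.7, automation_confidence=0.3))
```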


Author(s): Mark W. Mueller, Seung Jae Lee, Raffaello D’Andrea

The design and control of drones remain areas of active research, and here we review recent progress in this field. In this article, we discuss the design objectives and related physical scaling laws, focusing on energy consumption, agility and speed, and survivability and robustness. We divide the control of such vehicles into low-level stabilization and higher-level planning such as motion planning, and we argue that a highly relevant problem is the integration of sensing with control and planning. Lastly, we describe some vehicle morphologies and the trade-offs that they represent. We specifically compare multicopters with winged designs and consider the effects of multivehicle teams. Expected final online publication date for the Annual Review of Control, Robotics, and Autonomous Systems, Volume 5 is May 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
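The division into low-level stabilization and higher-level planning can be sketched as a cascaded loop. The one-dimensional double-integrator plant and PD gains below are stand-ins for illustration, not the review's vehicle models.

```python
# Cascaded structure: a high-level "planner" issues position setpoints,
# a low-level loop stabilizes the vehicle toward them.

def plan(t: float) -> float:
    """Placeholder planner: command a step to 10 m after 1 s."""
    return 10.0 if t >= 1.0 else 0.0

def low_level_pd(pos: float, vel: float, setpoint: float,
                 kp: float = 4.0, kd: float = 3.0) -> float:
    """Low-level stabilization: PD law producing an acceleration command."""
    return kp * (setpoint - pos) - kd * vel

pos, vel, dt = 0.0, 0.0, 0.01
for step in range(500):                    # simulate 5 s
    t = step * dt
    acc = low_level_pd(pos, vel, plan(t))  # inner loop tracks the outer loop's setpoint
    vel += acc * dt                        # double-integrator stand-in for one drone axis
    pos += vel * dt
print(f"position after 5 s: {pos:.2f} m")
```

The review's argument that sensing should be integrated with control and planning shows up even here: both loops silently assume perfect state estimates for `pos` and `vel`.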


Author(s): Anthony L. Baker, Sean M. Fitzhugh, Daniel E. Forster, Kristin E. Schaefer

The development of more effective human-autonomy teaming (HAT) will depend on the availability of validated measures of team performance. Communication provides a critical window into a team’s interactions, states, and performance, but much remains to be learned about how to carry communication measures over from the human teaming context to the HAT context. This paper therefore discusses the implementation of three communication assessment methodologies used in two Wingman Joint Capabilities Technology Demonstration field experiments. These field experiments involved Soldiers and Marines maneuvering vehicles and engaging in live-fire target gunnery, all with the assistance of intelligent autonomous systems. Crew communication data were analyzed using aggregate communication flow, relational event models, and linguistic similarity. We discuss how the assessments were implemented, what they revealed about the teaming between humans and autonomy, and lessons learned for future implementation of communication measurement approaches in the HAT context.
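Of the three methodologies, linguistic similarity is the simplest to sketch, e.g., as a bag-of-words cosine similarity between two crew members' utterances. The transcript lines below are invented, and the paper's actual analyses are considerably richer.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented crew utterances for illustration.
gunner = Counter("target identified two hundred meters request permission to fire".split())
commander = Counter("permission granted engage target two hundred meters".split())

print(f"lexical similarity: {cosine_similarity(gunner, commander):.2f}")
```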


Author(s): Ruikun Luo, Na Du, Kevin Y. Huang, X. Jessie Yang

Human-autonomy teaming is a major emphasis in the ongoing transformation of the future workspace, wherein human agents and autonomous agents are expected to work as a team. While increasingly complex algorithms empower autonomous systems, one major concern arises from the human factors perspective: human agents have difficulty deciphering autonomy-generated solutions and increasingly perceive autonomy as a mysterious black box. This lack of transparency can lead to a lack of trust in autonomy and sub-optimal team performance (Chen and Barnes, 2014; Endsley, 2017; Lyons and Havig, 2014; de Visser et al., 2018; Yang et al., 2017). In response, researchers have investigated ways to enhance autonomy transparency. Existing human factors research on autonomy transparency has largely concentrated on conveying automation reliability or likelihood/(un)certainty information (Beller et al., 2013; McGuirl and Sarter, 2006; Wang et al., 2009; Neyedli et al., 2011). Providing explanations of automation’s behaviors is another way to increase transparency, and it leads to higher performance and trust (Dzindolet et al., 2003; Mercado et al., 2016). Specifically, in the context of automated vehicles, studies have shown that informing drivers of the reasons for an automated vehicle’s actions decreased drivers’ anxiety and increased their sense of control, preference, and acceptance (Koo et al., 2014, 2016; Forster et al., 2017). However, these studies largely focused on conveying simple likelihood information or used hand-crafted explanations, with only a few exceptions (e.g., Mercado et al., 2016). Further research is needed to examine potential design structures for autonomy transparency.

In the present study, we propose an option-centric explanation approach inspired by research on design rationale. Design rationale is an area of design science focused on the “representation for explicitly documenting the reasoning and argumentation that make sense of a specific artifact” (MacLean et al., 1991). Its theoretical underpinning is that what matters to designers is not just the specific artifact itself but its other possibilities: why an artifact is designed in a particular way rather than how it might otherwise be. We aim to evaluate the effectiveness of the option-centric explanation approach on trust, dependence, and team performance.

We conducted a human-in-the-loop experiment with 34 participants (age: mean = 23.7 years, SD = 2.88 years). We developed a simulated game, Treasure Hunter, in which participants and an intelligent assistant worked together to uncover a map for treasures. The intelligent assistant’s ability, intent, and decision-making rationale were conveyed in an option-centric rationale display. The experiment used a between-subjects design with one independent variable: whether the option-centric rationale explanation was provided. Participants were randomly assigned to one of the two explanation conditions. We collected participants’ trust in the intelligent assistant, their confidence in accomplishing the task without the assistant, and their workload for the whole session, as well as their scores for each map.

The results showed that conveying the intelligent assistant’s ability, intent, and decision-making rationale in the option-centric rationale display led to higher task performance. With all the options displayed, participants had a better understanding and overview of the system, and could therefore use the intelligent assistant more appropriately and earn higher scores. Notably, each participant played only 10 maps during the session; the advantages of the option-centric rationale display might be more apparent over more rounds. Although not significant at the .05 level, there was a trend toward lower workload when the rationale explanation was displayed. Our study contributes to research on human-autonomy teaming by considering the important role of explanation displays, which can help human operators build appropriate trust and improve human-autonomy team performance.
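A minimal sketch of what an option-centric rationale record might contain, loosely following the Questions-Options-Criteria style of design rationale (MacLean et al., 1991). The field names, payoff scores, and Treasure Hunter options are invented for illustration, not the study's actual display.

```python
from dataclasses import dataclass, field

@dataclass
class Option:
    action: str
    pros: list = field(default_factory=list)
    cons: list = field(default_factory=list)
    estimated_payoff: float = 0.0   # invented scoring for the sketch

@dataclass
class Rationale:
    question: str
    options: list
    recommended: int                # index into options

    def render(self) -> str:
        """Render all options, not just the recommendation, so the
        operator sees why this choice and not its alternatives."""
        lines = [self.question]
        for i, o in enumerate(self.options):
            mark = "->" if i == self.recommended else "  "
            lines.append(f"{mark} {o.action} (payoff {o.estimated_payoff:.1f}); "
                         f"pros: {', '.join(o.pros)}; cons: {', '.join(o.cons)}")
        return "\n".join(lines)

r = Rationale(
    question="Which tile should we uncover next?",
    options=[
        Option("uncover A3", ["near last treasure"], ["area mostly searched"], 0.4),
        Option("uncover C1", ["unexplored region"], ["far from current position"], 0.7),
    ],
    recommended=1,
)
print(r.render())
```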


2018 · Vol 37 (9) · pp. 904-925
Author(s): Jonas Lundberg, Mattias Arvola, Carl Westin, Stefan Holmlid, Mathias Nordvall, ...

2014 · Vol 590 · pp. 667-671
Author(s): Fábio Henrique Antunes Vieira, Carlos Affonso, Manoel Cléber de Sampaio Alves

In the search for intelligent, flexible, and self-adjusting imaging systems that could reduce the required presence of human operators, a range of techniques is available, each of which can control the process with the assistance of autonomous systems, whether software or hardware. Modeling such processes with traditional computational techniques is quite difficult, given the complexity and non-linearity of imaging systems. Compared to traditional models, Artificial Neural Networks (ANN) perform well at noise elimination and non-linear data treatment. The challenges in the wood industry consequently justify the use of ANNs as a tool for process improvement that adds value to the final product. In addition, artificial intelligence techniques such as Neuro-Fuzzy Networks (NFN) have proven efficient, since they combine the ANN’s ability to learn from examples and generalize the learned information with the capacity of fuzzy logic to transform linguistic variables into rules. An adaptive neuro-fuzzy inference system (ANFIS) combines both capabilities in pursuit of a specific goal.
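To illustrate the neuro-fuzzy idea, here is a minimal Sugeno-style fuzzy inference step of the kind ANFIS tunes. The two rules, membership parameters, and roughness-to-quality mapping are invented; in a real ANFIS these parameters are learned from examples rather than set by hand.

```python
import math

def gaussian(x: float, mean: float, sigma: float) -> float:
    """Gaussian membership function."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

def infer(roughness: float) -> float:
    """Two-rule Sugeno inference mapping an image-derived wood-surface
    feature to a quality score (all numbers illustrative):
      Rule 1: IF roughness is LOW  THEN quality = 0.9
      Rule 2: IF roughness is HIGH THEN quality = 0.2"""
    w_low = gaussian(roughness, mean=0.2, sigma=0.15)
    w_high = gaussian(roughness, mean=0.8, sigma=0.15)
    return (w_low * 0.9 + w_high * 0.2) / (w_low + w_high)

for r in (0.1, 0.5, 0.9):
    print(f"roughness={r:.1f} -> quality={infer(r):.2f}")
```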


Author(s): John Murray, Yili Liu

The identification of problems from numeric traffic measurements is an important part of control center activities in ATMS (Advanced Traffic Management Systems). However, an information modeling process that relies solely on ‘traditional’ quantitative data analysis does not faithfully reflect the actual methods used by human operators. In addition to common-sense knowledge and specific contextual information, operators also use various heuristics and rules of thumb to supplement the numerical analysis. This paper describes an experiment examining the effectiveness of an expert system that integrates quantitative and qualitative traffic information using a human-centered knowledge system design. The system's performance was investigated using a data suite of real traffic scenarios; the statistically significant results showed that the integrated process outperformed the ‘traditional’ quantitative analysis running alone.
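A toy sketch of the quantitative-plus-qualitative integration described here: a numeric screen over traffic measurements, refined by operator-style rules of thumb. The thresholds and rules are invented for illustration, not taken from the paper's knowledge base.

```python
def classify(occupancy_pct: float, avg_speed_kmh: float, context: dict) -> str:
    """Combine a quantitative screen with qualitative heuristics."""
    # Quantitative screen over the numeric measurements.
    if occupancy_pct > 35 and avg_speed_kmh < 30:
        verdict = "possible incident"
    elif occupancy_pct > 25:
        verdict = "heavy but moving"
    else:
        verdict = "normal"
    # Operator-style rules of thumb refine the numeric verdict.
    if verdict == "possible incident" and context.get("event_nearby"):
        return "expected congestion (event letting out)"
    if verdict == "possible incident" and context.get("rain"):
        return "weather slowdown, keep watching"
    return verdict

print(classify(40, 22, {"event_nearby": True}))
```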


Author(s): Chad L. Stephens, Kellie D. Kennedy, Brenda L. Crook, Ralph A. Williams, Paul Schutte

An experiment investigated the impact of normobaric hypoxia induction on aircraft pilot performance, specifically evaluating hypoxia as a method to induce mild cognitive impairment in order to explore opportunities for human-autonomous systems integration. Results of this exploratory study show that a simulated altitude of 15,000 feet did not induce cognitive deficits, as indicated by performance on written, computer-based, and simulated flight tasks. However, the subjective data showed that the pilot test subjects exerted increased effort to maintain equivalent performance in the flight simulation task. This study adds to current knowledge of performance decrement and pilot workload assessment, with the aim of improving automation support and increasing aviation safety.
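The reported pattern, equivalent performance but higher subjective effort, would typically be tested with a paired comparison of effort ratings across the two altitude conditions. The sketch below uses invented ratings, not the study's data.

```python
from statistics import mean, stdev
from math import sqrt

# Invented per-pilot subjective effort ratings (0-100) for illustration.
sea_level = [42, 55, 38, 60, 47, 51, 44, 58]
altitude  = [55, 63, 49, 71, 52, 64, 50, 69]   # 15,000 ft simulated

diffs = [a - s for a, s in zip(altitude, sea_level)]
t_stat = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))  # paired t statistic
print(f"mean effort increase: {mean(diffs):.1f} points, paired t = {t_stat:.2f}")
```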

