Enhancing Transparency in Human-autonomy Teaming via the Option-centric Rationale Display

Author(s):  
Ruikun Luo ◽  
Na Du ◽  
Kevin Y. Huang ◽  
X. Jessie Yang

Human-autonomy teaming is a major emphasis in the ongoing transformation of the future workspace, wherein human agents and autonomous agents are expected to work as a team. While increasingly complex algorithms empower autonomous systems, one major concern arises from a human factors perspective: human agents have difficulty deciphering autonomy-generated solutions and increasingly perceive autonomy as a mysterious black box. This lack of transparency can lead to a lack of trust in autonomy and sub-optimal team performance (Chen and Barnes, 2014; Endsley, 2017; Lyons and Havig, 2014; de Visser et al., 2018; Yang et al., 2017). In response to this concern, researchers have investigated ways to enhance autonomy transparency. Existing human factors research on autonomy transparency has largely concentrated on conveying automation reliability or likelihood/(un)certainty information (Beller et al., 2013; McGuirl and Sarter, 2006; Wang et al., 2009; Neyedli et al., 2011). Providing explanations of automation's behaviors is another way to increase transparency, which leads to higher performance and trust (Dzindolet et al., 2003; Mercado et al., 2016). Specifically, in the context of automated vehicles, studies have shown that informing drivers of the reasons for the actions of automated vehicles decreased drivers' anxiety and increased their sense of control, preference, and acceptance (Koo et al., 2014, 2016; Forster et al., 2017). However, these studies largely focused on conveying simple likelihood information or used hand-crafted explanations, with only a few exceptions (e.g., Mercado et al., 2016). Further research is needed to examine potential design structures for autonomy transparency.

In the present study, we propose an option-centric explanation approach inspired by research on design rationale. Design rationale is an area of design science focusing on the "representation for explicitly documenting the reasoning and argumentation that make sense of a specific artifact" (MacLean et al., 1991). The theoretical underpinning of design rationale is that what matters to designers is not just the specific artifact itself but also its other possibilities: why an artifact is designed in a particular way rather than in another way it might have been. We aim to evaluate the effectiveness of the option-centric explanation approach on trust, dependence, and team performance. We conducted a human-in-the-loop experiment with 34 participants (age: mean = 23.7 years, SD = 2.88 years). We developed a simulated game, Treasure Hunter, in which participants and an intelligent assistant worked together to uncover a map for treasures. The intelligent assistant's ability, intent, and decision-making rationale were conveyed in the option-centric rationale display. The experiment used a between-subjects design with one independent variable: whether the option-centric rationale explanation was provided. Participants were randomly assigned to one of the two explanation conditions. Participants' trust in the intelligent assistant, their confidence in completing the task without the intelligent assistant, and their workload for the whole session were collected, as well as their scores for each map.

The results showed that, by conveying the intelligent assistant's ability, intent, and decision-making rationale in the option-centric rationale display, participants achieved higher task performance. With all the options displayed, participants had a better understanding and overview of the system; therefore, they could utilize the intelligent assistant more appropriately and earned higher scores. It is notable that each participant played only 10 maps during the whole session; the advantages of the option-centric rationale display might be more apparent if more rounds were played. Although not significant at the .05 level, there was a trend suggesting lower workload when the rationale explanation was displayed. Our study contributes to research on human-autonomy teaming by highlighting the important role of explanation displays, which can help human operators build appropriate trust and improve human-autonomy team performance.
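As a purely illustrative sketch of the idea behind an option-centric rationale display (the class, field, and function names below are hypothetical and not taken from the study's software), each candidate action the assistant considered can be shown alongside the criteria that favour or disfavour it, with the chosen option marked:

```python
from dataclasses import dataclass

@dataclass
class Option:
    """One candidate action the intelligent assistant considered."""
    action: str            # e.g. "dig at marked site"
    expected_score: float  # assistant's estimate of the payoff
    pros: list[str]        # criteria favouring this option
    cons: list[str]        # criteria against this option

def render_rationale(options: list[Option], chosen: int) -> str:
    """Return a text panel listing every option, marking the one chosen."""
    lines = []
    for i, opt in enumerate(options):
        marker = ">>" if i == chosen else "  "
        lines.append(f"{marker} {opt.action} (expected score: {opt.expected_score:.1f})")
        lines.append(f"     for: {'; '.join(opt.pros) or 'none listed'}")
        lines.append(f"     against: {'; '.join(opt.cons) or 'none listed'}")
    return "\n".join(lines)

# The display presents the rejected alternatives as well as the choice,
# which is the core idea borrowed from design rationale.
panel = render_rationale(
    [Option("dig at marked site", 8.0, ["high treasure likelihood"], ["far from base"]),
     Option("scan nearby tiles", 5.5, ["low risk"], ["slower map coverage"])],
    chosen=0,
)
print(panel)
```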

Author(s):  
Maya S. Luster ◽  
Brandon J. Pitts

In the field of Human Factors, the concept of trust in automation can help to explain how and why users interact with particular systems. One way to examine trust is through task performance and/or behavioral observations. Previous work has identified several system-related moderators of trust in automation, such as reliability and complexity. However, the effects of system certainty, i.e., the knowledge that a machine has regarding its own decision-making abilities, on trust remain unclear. The goal of this study was to examine the extent to which system certainty affects perceived trust. Participants performed a partially simulated flight task and decided what action to take in response to targets in the environment detected by the aircraft's automation. The automation's certainty levels in recognizing targets were 30%, 50%, and 80%. Overall, participants accepted the system's recommendation regardless of the certainty level, and trust in the system increased as the system's certainty level increased. Results may help to inform the development of future autonomous systems.
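As a hedged sketch of the kind of manipulation described above (the function, return format, and calibration assumption are illustrative, not the study's actual task code), each automation recommendation can carry a stated certainty level that the participant sees before accepting or rejecting it:

```python
import random

CERTAINTY_LEVELS = (0.30, 0.50, 0.80)  # the three levels manipulated in the study

def detect_target(certainty: float) -> dict:
    """Simulate one automation recommendation tagged with its stated certainty.

    Whether the detection is actually correct is drawn with probability equal to
    the stated certainty, so the displayed value is calibrated by construction
    (an assumption for this sketch, not a claim about the study's simulator).
    """
    correct = random.random() < certainty
    return {"recommendation": "target present", "certainty": certainty, "correct": correct}

# A participant's decision rule (e.g. "accept whenever certainty >= 0.5") could then
# be compared against these recommendations to study acceptance behaviour.
trial = detect_target(random.choice(CERTAINTY_LEVELS))
print(f"Automation reports '{trial['recommendation']}' at {trial['certainty']:.0%} certainty")
```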


Author(s):  
Juan Marcelo Parra-Ullauri ◽  
Antonio García-Domínguez ◽  
Nelly Bencomo ◽  
Changgang Zheng ◽  
Chen Zhen ◽  
...  

Modern software systems are increasingly expected to show higher degrees of autonomy and self-management to cope with uncertain and diverse situations. As a consequence, autonomous systems can exhibit unexpected and surprising behaviours. This is exacerbated by the ubiquity and complexity of Artificial Intelligence (AI)-based systems. Such is the case for Reinforcement Learning (RL), where autonomous agents learn through trial and error how to find good solutions to a problem. Thus, the underlying decision-making criteria may become opaque to users who interact with the system and who may require explanations about the system's reasoning. Available work on eXplainable Reinforcement Learning (XRL) offers different trade-offs: for runtime explanations, for example, the approaches are model-specific or can only analyse results after the fact. In contrast to these approaches, this paper aims to provide an online, model-agnostic approach to XRL towards trustworthy and understandable AI. We present ETeMoX, an architecture based on temporal models that keeps track of the decision-making processes of RL systems. In cases where resources are limited (e.g., storage capacity or response time), the architecture also integrates complex event processing, an event-driven approach, to detect and store only matches to event patterns of interest instead of keeping the entire history. The approach is applied to a mobile communications case study that uses RL for its decision-making. To test the generalisability of our approach, three variants of the underlying RL algorithms are used: Q-Learning, SARSA and DQN. The encouraging results show that, using the proposed configurable architecture, RL developers are able to obtain explanations about the evolution of a metric and the relationships between metrics, and are able to track situations of interest happening over time windows.
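The following is a minimal, model-agnostic sketch of the general idea of recording RL decision events and keeping only those that match an event pattern instead of the full history; the names, fields, and the example pattern are assumptions for illustration and do not reproduce the ETeMoX implementation:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class DecisionEvent:
    """One decision taken by an RL agent, logged for later explanation."""
    step: int
    state: tuple
    action: int
    reward: float
    epsilon: float  # exploration rate at the time of the decision

def matches_pattern(window: deque) -> bool:
    """Illustrative event pattern: reward dropped over the last three decisions."""
    if len(window) < 3:
        return False
    rewards = [e.reward for e in window]
    return rewards[0] > rewards[1] > rewards[2]

stored: list[DecisionEvent] = []   # only pattern matches are kept, not the whole trace
window: deque = deque(maxlen=3)    # sliding window over the most recent decisions

def log_decision(event: DecisionEvent) -> None:
    """Append the event to the sliding window and store it only on a pattern match."""
    window.append(event)
    if matches_pattern(window):
        stored.extend(window)  # keep the matching window for later querying

# Usage with any agent (Q-Learning, SARSA, DQN): call log_decision() after each step.
for step, reward in enumerate([1.0, 0.8, 0.5, 0.9]):
    log_decision(DecisionEvent(step, state=(0, step), action=0, reward=reward, epsilon=0.1))
print(f"{len(stored)} events stored out of 4 decisions")
```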


Author(s):  
Nathan J. McNeese ◽  
Mustafa Demir ◽  
Nancy J. Cooke ◽  
Christopher Myers

Objective Three different team configurations are compared with the goal of better understanding human-autonomy teaming (HAT). Background Although an extensive literature on human-automation interaction exists, much less is known about HAT, in which humans and autonomous agents interact as coordinated units. Further research must be conducted to better understand how all-human teams compare to HAT. Methods In an unmanned aerial system (UAS) context, a comparison was made among three types of three-member teams: (1) synthetic teams, in which the pilot role was assigned to a synthetic teammate; (2) control teams, in which the pilot was an inexperienced human; and (3) experimenter teams, in which an experimenter served as an experienced pilot. Ten teams of each type participated. Measures of team performance, target processing efficiency, team situation awareness, and team verbal behaviors were analyzed. Results Synthetic teams performed as well at the mission level as control (all-human) teams but processed targets less efficiently. Experimenter teams performed better across all other measures compared to control and synthetic teams. Conclusion Though there is potential for a synthetic agent to function as a full-fledged teammate, further advances in autonomy are needed to improve team-level dynamics in HAT. Application This research contributes to our understanding of how to make autonomy a good team player.


2017 ◽  
Author(s):  
Eugenia Isabel Gorlin ◽  
Michael W. Otto

To live well in the present, we take direction from the past. Yet, individuals may engage in a variety of behaviors that distort their past and current circumstances, reducing the likelihood of adaptive problem solving and decision making. In this article, we attend to self-deception as one such class of behaviors. Drawing upon research showing both the maladaptive consequences and self-perpetuating nature of self-deception, we propose that self-deception is an understudied risk and maintaining factor for psychopathology, and we introduce a “cognitive-integrity”-based approach that may hold promise for increasing the reach and effectiveness of our existing therapeutic interventions. Pending empirical validation of this theoretically-informed approach, we posit that patients may become more informed and autonomous agents in their own therapeutic growth by becoming more honest with themselves.


Author(s):  
Mirette Dubé ◽  
Jason Laberge ◽  
Elaine Sigalet ◽  
Jonas Shultz ◽  
Christine Vis ◽  
...  

Purpose: The aim of this article is to provide a case study example of the preopening phase of an interventional trauma operating room (ITOR) using systems-focused simulation and human factors evaluations for healthcare environment commissioning. Background: Systems-focused simulation, underpinned by human factors science, is increasingly being used as a quality improvement tool to test and evaluate healthcare spaces with the stakeholders who use them. Purposeful real-to-life simulated events are rehearsed to give healthcare teams the opportunity to identify what is working well and what needs improvement within the work system, such as the tasks, environments, and processes that support the delivery of healthcare services. This project highlights salient evaluation objectives and methods used within the clinical commissioning phase of one of the first ITORs in Canada. Methods: A multistaged evaluation project to support clinical commissioning was facilitated, engaging 24 stakeholder groups. Key evaluation objectives included the evaluation of two transport routes, the switching of operating room (OR) tabletops, the use of the C-arm, and timely access to lead in the OR. Multiple evaluation methods were used, including observation, debriefing, time-based metrics, distance wheel metrics, equipment adjustment counts, and other transport route considerations. Results: The evaluation produced several types of data that allowed informed decision making on the most effective, efficient, and safest transport route for an exsanguinating trauma patient and healthcare team; improved efficiency in the use of the C-arm; significantly reduced the time to access lead; and uncovered a new process for switching the OR tabletop due to identified safety threats.


2021 ◽  
Vol 25 (1) ◽  
pp. 51-72
Author(s):  
Nathan J. McNeese ◽  
Mustafa Demir ◽  
Erin K. Chiou ◽  
Nancy J. Cooke

2021 ◽  
Vol 13 (4) ◽  
pp. 1948
Author(s):  
Qiaoning Zhang ◽  
Xi Jessie Yang ◽  
Lionel P. Robert

Automated vehicles (AVs) have the potential to benefit our society. Providing explanations is one approach to facilitating AV trust by decreasing uncertainty about automated decision-making. However, it is not clear whether explanations are equally beneficial for drivers across age groups in terms of trust and anxiety. To examine this, we conducted a mixed-design experiment with 40 participants divided into three age groups (i.e., younger, middle-aged, and older). Participants were presented with (1) no explanation, (2) an explanation given before the AV took action, (3) an explanation given after the AV took action, or (4) an explanation along with a request for permission to take action. Results highlight both commonalities and differences between age groups. These results have important implications for designing AV explanations and promoting trust.
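As an illustrative sketch of the four explanation conditions described above (the enum names, message text, and ordering logic are assumptions for this sketch, not the study's experimental materials):

```python
from enum import Enum, auto

class ExplanationCondition(Enum):
    NONE = auto()            # no explanation
    BEFORE_ACTION = auto()   # explanation given before the AV acts
    AFTER_ACTION = auto()    # explanation given after the AV acts
    ASK_PERMISSION = auto()  # explanation plus a request for permission

def present_event(condition: ExplanationCondition, explanation: str, act) -> None:
    """Order the explanation and the AV action according to the assigned condition."""
    if condition is ExplanationCondition.BEFORE_ACTION:
        print(explanation)
        act()
    elif condition is ExplanationCondition.AFTER_ACTION:
        act()
        print(explanation)
    elif condition is ExplanationCondition.ASK_PERMISSION:
        print(explanation)
        if input("Allow the maneuver? (y/n) ").lower() == "y":
            act()
    else:  # NONE: the AV acts without any explanation
        act()

# Example: a braking event under the 'explanation before action' condition.
present_event(ExplanationCondition.BEFORE_ACTION,
              "Braking: a pedestrian is crossing ahead.",
              lambda: print("[AV brakes]"))
```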

