Congressional Briefing Illustrates the Importance of Human Factors in Autonomous Systems Research and Design

2016 ◽  
Author(s):  
Barbara Wanchisen ◽  
Nancy Cooke ◽  
Mary Cummings ◽  
Robin Murphy


2022 ◽  
Author(s):  
Benjamin N. Kelley ◽  
Walter J. Waltz ◽  
Andrew Miloslavsky ◽  
Ralph A. Williams ◽  
Abraham K. Ishihara ◽  
...  

1962 ◽  
Vol 13 (4) ◽  
pp. 355-357
Author(s):  
Patrick Rivett

Author(s):  
Ruikun Luo ◽  
Na Du ◽  
Kevin Y. Huang ◽  
X. Jessie Yang

Human-autonomy teaming is a major emphasis in the ongoing transformation of the future workspace, in which human agents and autonomous agents are expected to work as a team. While increasingly complex algorithms empower autonomous systems, a major concern arises from the human factors perspective: human agents have difficulty deciphering autonomy-generated solutions and increasingly perceive autonomy as a mysterious black box. This lack of transparency can lead to a lack of trust in autonomy and sub-optimal team performance (Chen and Barnes, 2014; Endsley, 2017; Lyons and Havig, 2014; de Visser et al., 2018; Yang et al., 2017). In response to this concern, researchers have investigated ways to enhance autonomy transparency. Existing human factors research on autonomy transparency has largely concentrated on conveying automation reliability or likelihood/(un)certainty information (Beller et al., 2013; McGuirl and Sarter, 2006; Wang et al., 2009; Neyedli et al., 2011). Providing explanations of automation's behaviors is another way to increase transparency, leading to higher performance and trust (Dzindolet et al., 2003; Mercado et al., 2016). In the context of automated vehicles specifically, studies have shown that informing drivers of the reasons for an automated vehicle's actions decreased drivers' anxiety and increased their sense of control, preference, and acceptance (Koo et al., 2014, 2016; Forster et al., 2017). However, the studies mentioned above largely conveyed simple likelihood information or used hand-crafted explanations, with only a few exceptions (e.g., Mercado et al., 2016). Further research is needed to examine potential designs for autonomy transparency.

In the present study, we propose an option-centric explanation approach inspired by research on design rationale. Design rationale is an area of design science focused on the "representation for explicitly documenting the reasoning and argumentation that make sense of a specific artifact" (MacLean et al., 1991). Its theoretical underpinning is that what matters to designers is not just the specific artifact itself but also its other possibilities: why an artifact is designed in a particular way rather than how it might otherwise be. We aim to evaluate the effectiveness of the option-centric explanation approach on trust, dependence, and team performance.

We conducted a human-in-the-loop experiment with 34 participants (age: mean = 23.7 years, SD = 2.88 years). We developed a simulated game, Treasure Hunter, in which participants and an intelligent assistant worked together to uncover a map for treasures. The intelligent assistant's ability, intent, and decision-making rationale were conveyed in an option-centric rationale display. The experiment used a between-subjects design with one independent variable: whether the option-centric rationale explanation was provided. Participants were randomly assigned to one of the two explanation conditions. We collected participants' trust in the intelligent assistant, their confidence in accomplishing the task without it, and their workload over the whole session, as well as their score on each map. The results showed that conveying the intelligent assistant's ability, intent, and decision-making rationale in the option-centric rationale display led to higher task performance.
With all the options displayed, participants gained a better overview and understanding of the system, so they could use the intelligent assistant more appropriately and earned higher scores. Notably, each participant played only 10 maps during the session; the advantages of the option-centric rationale display might be more apparent over more rounds. Although not significant at the .05 level, there was also a trend toward lower workload when the rationale explanation was displayed. Our study contributes to research on human-autonomy teaming by demonstrating the important role of explanation displays, which can help human operators build appropriate trust and improve human-autonomy team performance.
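To make the option-centric idea concrete, the sketch below shows one minimal way such a display could be structured in Python. It is an illustration, not the authors' implementation: the Option class, the render_rationale_display helper, and the pros/cons fields are all hypothetical, loosely modeled on the design-rationale framing of MacLean et al. (1991).

```python
from dataclasses import dataclass, field

@dataclass
class Option:
    """One candidate action the assistant considered (hypothetical)."""
    label: str
    pros: list[str] = field(default_factory=list)
    cons: list[str] = field(default_factory=list)
    chosen: bool = False

def render_rationale_display(question: str, options: list[Option]) -> str:
    """Format an option-centric explanation: list every option the
    assistant weighed, not only the one it picked, so the operator can
    see why the chosen action beat its alternatives."""
    lines = [f"Decision: {question}"]
    for opt in options:
        marker = "->" if opt.chosen else "  "
        lines.append(f"{marker} {opt.label}")
        lines.extend(f"     + {p}" for p in opt.pros)
        lines.extend(f"     - {c}" for c in opt.cons)
    return "\n".join(lines)

# A hypothetical Treasure Hunter decision point:
print(render_rationale_display(
    "Which tile should we uncover next?",
    [Option("Uncover tile B3", pros=["highest expected treasure value"],
            cons=["far from current position"], chosen=True),
     Option("Uncover tile A1", pros=["adjacent to current position"],
            cons=["low expected value"])],
))
```

Showing the rejected alternatives alongside the chosen action is what distinguishes this style of display from a conventional "why" explanation, which typically justifies only the selected action.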


Systems ◽  
2020 ◽  
Vol 8 (1) ◽  
pp. 8 ◽  
Author(s):  
Gene M. Alarcon ◽  
Charles Walter ◽  
Anthony M. Gibson ◽  
Rose F. Gamble ◽  
August Capiola ◽  
...  

Automation and autonomous systems are quickly becoming a more ingrained aspect of modern society. The need to produce effective, secure computer code in a timely manner has led to the creation of automated code repair techniques that resolve issues quickly. However, research to date has largely ignored the human factors aspects of automated code repair. The current study explored trust perceptions, reuse intentions, and trust intentions in code repair with human-generated patches versus automated code repair patches. In addition, the presence or absence of comments in the header of the code was manipulated to determine its effect. Participants were 51 programmers with at least 3 years' experience and knowledge of the C programming language. Results indicated that only repair source (human vs. automated code repair) had a significant influence on trust perceptions and trust intentions: participants consistently reported higher levels of perceived trustworthiness, intention to reuse, and trust intentions for human referents compared to automated code repair. No significant effects were found for comments in the headers.
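The 2 (repair source: human vs. automated) × 2 (header comments: present vs. absent) design described above maps naturally onto a factorial analysis. The following is a minimal sketch of that kind of analysis in Python using synthetic data; it is not the study's dataset or actual statistical procedure (which may, for instance, have used repeated measures), and all variable names here are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 51  # matches the reported sample of 51 C programmers

# Synthetic stand-in data: trust ratings for human- vs. machine-repaired
# patches, with header comments present or absent.
df = pd.DataFrame({
    "source": rng.choice(["human", "automated"], size=n),
    "comments": rng.choice(["present", "absent"], size=n),
})
# Built-in fake effect mirroring the reported pattern: higher trust for
# human-authored repairs, no effect of header comments.
df["trust"] = 4.0 + 0.8 * (df["source"] == "human") + rng.normal(0, 0.5, n)

# Two-way ANOVA: main effects of source and comments, plus interaction.
model = ols("trust ~ C(source) * C(comments)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```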


2021 ◽  
Vol 5 (Supplement 1) ◽  
pp. 660-661
Author(s):  
Maria Pena ◽  
Jared Carrillo ◽  
Nonna Milyavskaya ◽  
Thomas Chan

Many autonomous systems are being developed to help older adults age in place. However, there is little research on the human factors that explain why older adults initially, and then continuously, trust these autonomous systems. More research on older adults and trust in autonomy is needed to facilitate better everyday use of these technologies. The current study conducted a literature review of the prevalent human factors that enable people to trust their interactions with smart technologies (e.g., artificial intelligence, navigational structures). Articles were collected from various disciplines on concepts such as trust in autonomy, human-computer interaction, and teamwork. Thematic analysis revealed two convergent areas associated with initial and continuous trust: human characteristics and technological characteristics. Human characteristics concern a person's ability to understand and use autonomous systems; in general, people with greater competence and ability with autonomous systems found it easier to carry out desired actions with smart technology. Technological characteristics concern the system's performance, explainability, and intended purpose; essentially, people were less critical of autonomous systems they perceived as useful, transparent, and predictable. Overall, an autonomous system's ability to perform its intended purpose and the user's knowledge and technical qualifications dominate the relationship between initial and continuous trust in autonomous systems. These are the prevalent factors to consider when creating trusted autonomous technologies that help older adults age in an increasingly advanced technological world.


2014 ◽  
Vol 6 (4) ◽  
pp. 49-66
Author(s):  
Graham Peter Pervan ◽  
David Arnott

This research project was principally motivated by a concern for the direction and relevance of research on systems that support group work and negotiation. The main areas of focus are the publication frequency and outlets for group support systems (GSS) and negotiation support systems (NSS) research, the research strategies used in published articles, and the professional relevance of the research. The project analysed 383 GSS articles and 82 NSS articles published in 16 major journals from 1990 to 2010. The findings indicate a significant dependence on the Journal of Group Decision and Negotiation, but also an opportunity for newer journals such as the International Journal of Decision Support Systems Technology. Other issues include a focus on experimental research and design science, weak theoretical foundations and research methodologies, and a focus on operational-level problems. Of great concern is the finding that GSS and NSS research has relatively low professional and managerial relevance. Eight key strategies for dealing with these issues are recommended.

