Balancing Performance and Human Autonomy With Implicit Guidance Agent

2021 ◽  
Vol 4 ◽  
Author(s):  
Ryo Nakahashi ◽  
Seiji Yamada

The human-agent team, in which humans and autonomous agents collaborate to accomplish a task, is a typical setting in human-AI collaboration. For effective collaboration, humans need an effective plan, but in realistic situations they may have difficulty computing the best plan due to cognitive limitations. In such cases, guidance from an agent with greater computational resources can be useful. However, if an agent guides human behavior explicitly, the human may feel that they have lost autonomy and are being controlled by the agent. We therefore investigated implicit guidance offered through an agent’s behavior. With this type of guidance, the agent acts in a way that makes it easy for the human to find an effective plan for the collaborative task, which the human can then improve. Because the human improves the plan voluntarily, they maintain autonomy. We modeled a collaborative agent with implicit guidance by integrating the Bayesian Theory of Mind into existing collaborative-planning algorithms and demonstrated through a behavioral experiment that implicit guidance helps humans balance improving their plans with retaining autonomy.
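
As an illustration of the kind of inference the abstract describes, here is a minimal sketch of Bayesian Theory of Mind goal inference: the observing agent treats the human as an approximately rational planner and updates a posterior over candidate goals from observed actions. The goal set, the noisy-rational likelihood, and the rationality parameter `beta` are illustrative assumptions, not the authors' implementation.

```python
import math

def infer_goal_posterior(observations, goals, q_value, prior, beta=2.0):
    """Bayesian Theory of Mind sketch: P(goal | actions) ~ P(actions | goal) P(goal).

    Assumes a noisy-rational (Boltzmann) human: each observed action is taken
    with probability proportional to exp(beta * Q(state, action, goal)).
    """
    posterior = dict(prior)
    for state, action, available in observations:
        for goal in goals:
            # Likelihood of this action under a noisy-rational model of the human.
            z = sum(math.exp(beta * q_value(state, a, goal)) for a in available)
            posterior[goal] *= math.exp(beta * q_value(state, action, goal)) / z
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

# Hypothetical toy usage: two candidate goals; Q favors the action serving each goal.
goals = ["exit_A", "exit_B"]
q = lambda s, a, g: 1.0 if a == g[-1] else 0.0   # action "A" serves exit_A, etc.
obs = [("start", "A", ["A", "B"])]
print(infer_goal_posterior(obs, goals, q, {"exit_A": 0.5, "exit_B": 0.5}))
```

An implicitly guiding agent could then choose its own moves so that the human's best response under the inferred goal coincides with the better joint plan, rather than issuing explicit instructions.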

2020 ◽  
Vol 34 (4) ◽  
pp. 143-164
Author(s):  
Peter C. Kipp ◽  
Mary B. Curtis ◽  
Ziyin Li

SYNOPSIS Advances in IT suggest that computerized intelligent agents (IAs) may soon occupy many roles that presently employ human agents. A significant concern is the ethical conduct of those who use IAs, including their possible utilization by managers to engage in earnings management. We investigate how financial reporting decisions are affected when they are supported by the work of an IA versus a human agent, with varying levels of agent autonomy. In an experiment with experienced managers, we vary agent type (human versus IA) and autonomy (more versus less), finding that managers make less aggressive financial reporting decisions with IAs than with human agents, and less aggressive decisions with less autonomous agents than with more autonomous ones. Managers' perception of control over their agent and their ability to diffuse responsibility for financial reporting decisions explain the effects of agent type and autonomy on managers' financial reporting decisions.


2010 ◽  
pp. 602-621
Author(s):  
Wan Ching Ho ◽  
Kerstin Dautenhahn ◽  
Meiyii Lim ◽  
Sibylle Enz ◽  
Carsten Zoll ◽  
...  

This article presents research towards the development of a virtual learning environment (VLE) inhabited by intelligent virtual agents (IVAs) and modelling a scenario of inter-cultural interactions. The ultimate aim of this VLE is to allow users to reflect upon and learn about intercultural communication and collaboration. Rather than predefining the interactions among the virtual agents and scripting the possible interactions afforded by this environment, we pursue a bottom-up approach whereby inter-cultural communication emerges from interactions with and among autonomous agents and the user(s). The intelligent virtual agents inhabiting this environment are expected to broaden their knowledge about the world and about other agents, who may be of different cultural backgrounds, through interaction. This work is part of a collaborative effort within the European research project eCIRCUS. Specifically, this article focuses on our continuing research on emotional knowledge learning in autobiographic social agents.
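
To make the notion of an autobiographic social agent more concrete, the sketch below shows one plausible episodic-memory structure in which an agent stores significant events together with an emotional appraisal and recalls similar past episodes to inform its behaviour. The event schema and the word-overlap similarity measure are illustrative assumptions, not the eCIRCUS implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    """One autobiographic memory entry: what happened, with whom, how it felt."""
    event: str            # e.g. "greeted stranger with direct eye contact"
    participants: tuple   # agents involved in the episode
    emotion: str          # appraisal label, e.g. "distress", "joy"
    intensity: float      # appraisal strength in [0, 1]

@dataclass
class AutobiographicMemory:
    episodes: list = field(default_factory=list)

    def record(self, episode: Episode) -> None:
        self.episodes.append(episode)

    def recall_similar(self, event: str) -> list:
        """Retrieve past episodes sharing words with the current event description."""
        words = set(event.split())
        return [e for e in self.episodes if words & set(e.event.split())]

memory = AutobiographicMemory()
memory.record(Episode("greeted stranger with direct eye contact",
                      ("agent_a", "stranger"), "distress", 0.7))
# Before acting, the agent consults emotionally significant past episodes.
print(memory.recall_similar("greeted host with direct eye contact"))
```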


2011 ◽  
pp. 204-224 ◽  
Author(s):  
Fernand Gobet ◽  
Peter C.R. Logan

This chapter provides an introduction to the CHREST architecture of cognition and shows how this architecture can help develop a full theory of mind. After describing the main components and mechanisms of the architecture, we discuss several domains where it has already been successfully applied, such as in the psychology of expert behaviour, the acquisition of language by children, and the learning of multiple representations in physics. We highlight the characteristics of CHREST that enable it to account for empirical data, including self-organisation, an emphasis on cognitive limitations, the presence of a perception-learning cycle, and the use of naturalistic data as input for learning. We argue that some of these characteristics can help shed light on the hard questions facing theorists developing a full theory of mind, such as intuition, the acquisition and use of concepts, the link between cognition and emotions, and the role of embodiment.
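
CHREST's core learning mechanism is a discrimination network that grows chunks incrementally from perceptual input. The sketch below gives a highly simplified version of that idea, assuming sequences of symbolic features as input; the real architecture's familiarisation and discrimination mechanisms, and the node images they build, are considerably richer.

```python
class Node:
    """A node in a simplified discrimination network holding a chunk (image)."""
    def __init__(self, image=()):
        self.image = tuple(image)       # the chunk stored at this node
        self.children = {}              # test (single feature) -> child Node

class DiscriminationNet:
    def __init__(self):
        self.root = Node()

    def recognise(self, pattern):
        """Sort a pattern through the network by following matching tests."""
        node, i = self.root, 0
        while i < len(pattern) and pattern[i] in node.children:
            node = node.children[pattern[i]]
            i += 1
        return node, i

    def learn(self, pattern):
        """Discriminate: add a new test and child when the pattern is unfamiliar."""
        node, i = self.recognise(pattern)
        if i < len(pattern):
            node.children[pattern[i]] = Node(pattern[: i + 1])

net = DiscriminationNet()
for _ in range(3):                 # repeated exposure grows larger chunks
    net.learn(("N", "f", "3"))     # e.g. a chess-like piece-on-square feature
print(net.recognise(("N", "f", "3"))[0].image)   # ('N', 'f', '3')
```

The perception-learning cycle the chapter emphasises appears here in miniature: what the network already recognises determines what it learns next.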


Author(s):  
Ruikun Luo ◽  
Na Du ◽  
Kevin Y. Huang ◽  
X. Jessie Yang

Human-autonomy teaming is a major emphasis in the ongoing transformation of the future workspace, wherein human agents and autonomous agents are expected to work as a team. While increasingly complex algorithms empower autonomous systems, one major concern arises from the human factors perspective: human agents have difficulty deciphering autonomy-generated solutions and increasingly perceive autonomy as a mysterious black box. This lack of transparency can lead to a lack of trust in autonomy and to sub-optimal team performance (Chen and Barnes, 2014; Endsley, 2017; Lyons and Havig, 2014; de Visser et al., 2018; Yang et al., 2017). In response to this concern, researchers have investigated ways to enhance autonomy transparency. Existing human factors research on autonomy transparency has largely concentrated on conveying automation reliability or likelihood/(un)certainty information (Beller et al., 2013; McGuirl and Sarter, 2006; Wang et al., 2009; Neyedli et al., 2011). Providing explanations of automation's behaviors is another way to increase transparency, leading to higher performance and trust (Dzindolet et al., 2003; Mercado et al., 2016). Specifically, in the context of automated vehicles, studies have shown that informing drivers of the reasons for an automated vehicle's actions decreased drivers' anxiety and increased their sense of control, preference, and acceptance (Koo et al., 2014, 2016; Forster et al., 2017). However, the studies mentioned above largely focused on conveying simple likelihood information or used hand-crafted explanations, with only a few exceptions (e.g., Mercado et al., 2016). Further research is needed to examine potential design structures for autonomy transparency.

In the present study, we propose an option-centric explanation approach, inspired by research on design rationale. Design rationale is an area of design science focusing on the "representation for explicitly documenting the reasoning and argumentation that make sense of a specific artifact" (MacLean et al., 1991). The theoretical underpinning of design rationale is that what matters to designers is not just the specific artifact itself but its other possibilities: why an artifact is designed in a particular way compared to how it might otherwise be. We aim to evaluate the effectiveness of the option-centric explanation approach on trust, dependence, and team performance.

We conducted a human-in-the-loop experiment with 34 participants (age: mean = 23.7 years, SD = 2.88 years). We developed a simulated game, Treasure Hunter, in which participants and an intelligent assistant worked together to uncover a map of treasures. The intelligent assistant's ability, intent, and decision-making rationale were conveyed in an option-centric rationale display. The experiment used a between-subjects design with one independent variable: whether the option-centric rationale explanation was provided. Participants were randomly assigned to one of the two explanation conditions. Participants' trust in the intelligent assistant, their confidence in accomplishing the task without the assistant, and their workload for the whole session were collected, along with their scores for each map.

The results showed that conveying the intelligent assistant's ability, intent, and decision-making rationale in the option-centric rationale display led to higher task performance. With the display of all the options, participants had a better understanding and overview of the system; they could therefore utilize the intelligent assistant more appropriately and earned higher scores. Notably, each participant played only 10 maps during the session, so the advantages of the option-centric rationale display might be more apparent if more rounds were played. Although not significant at the .05 level, there was a trend suggesting lower workload when the rationale explanation was displayed. Our study contributes to research on human-autonomy teaming by highlighting the role of explanation displays, which can help human operators build appropriate trust and improve human-autonomy team performance.
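
The design-rationale tradition the authors draw on is often formalised as Questions, Options, and Criteria (QOC; MacLean et al., 1991). Below is a minimal sketch of how an option-centric rationale display might structure an assistant's recommendation, showing all options and their criteria rather than only the chosen action. The field names, criteria, weights, and rendering are illustrative assumptions, not the authors' Treasure Hunter implementation.

```python
from dataclasses import dataclass

@dataclass
class Option:
    """One candidate action, with the criteria assessments shown to the user."""
    action: str
    assessments: dict          # criterion name -> score in [0, 1]

    def score(self, weights: dict) -> float:
        return sum(weights[c] * v for c, v in self.assessments.items())

def rationale_display(question: str, options: list, weights: dict) -> str:
    """Render all options, not just the chosen one, so the user sees *why*."""
    ranked = sorted(options, key=lambda o: o.score(weights), reverse=True)
    lines = [f"Question: {question}"]
    for i, opt in enumerate(ranked):
        marker = "<- recommended" if i == 0 else ""
        lines.append(f"  {opt.action}: {opt.assessments} {marker}")
    return "\n".join(lines)

options = [
    Option("search north quadrant", {"expected_treasure": 0.8, "safety": 0.9}),
    Option("search east quadrant",  {"expected_treasure": 0.5, "safety": 0.4}),
]
print(rationale_display("Where should we search next?",
                        options, {"expected_treasure": 0.6, "safety": 0.4}))
```

Displaying the rejected options alongside the recommendation is what distinguishes this approach from a plain confidence or likelihood readout.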


Author(s):  
Thomas O’Neill ◽  
Nathan McNeese ◽  
Amy Barron ◽  
Beau Schelble

Objective We define human–autonomy teaming and offer a synthesis of the existing empirical research on the topic. Specifically, we identify the research environments, dependent variables, themes representing the key findings, and critical future research directions. Background Whereas a burgeoning literature on high-performance teamwork identifies the factors critical to success, much less is known about how human–autonomy teams (HATs) achieve success. Human–autonomy teamwork involves humans working interdependently toward a common goal alongside autonomous agents. Autonomous agents exhibit a degree of self-government and self-directed behavior (agency); they take on a unique role or set of tasks and work interdependently with human team members to achieve a shared objective. Method We searched the literature on human–autonomy teaming. To meet our criteria for inclusion, a paper needed to report empirical research and to meet our definition of human–autonomy teaming. We found 76 articles that met these criteria. Results We report on the research environments and find that the key independent variables involve autonomous agent characteristics, team composition, task characteristics, human individual differences, training, and communication. We identify themes for each of these and discuss future research needs. Conclusion There are areas where research findings are clear and consistent, but there remain many opportunities for future research. Particularly important will be research that identifies mechanisms linking team input to team output variables.


Philosophies ◽  
2020 ◽  
Vol 5 (3) ◽  
pp. 12
Author(s):  
Lorenzo Magnani

Research on autonomy exhibits a constellation of variegated perspectives: from the problem of its crude deprivation to the study of the distinction between personal and moral autonomy, from the role of a “self as narrator” that classifies its own actions as autonomous or not to the importance of the political side and, finally, to the need to defend and enhance human autonomy. My precise concern in this article is to examine the role of the human cognitive processes that give rise to the most important ways of tracking the external world and human behavior, in their relationship to some central aspects of human autonomy, also with the aim of clarifying the link between autonomy and the ownership of our own destinies. I will also focus on the preservation of human autonomy as an important component of human dignity, seeing it as strictly associated with knowledge and, even more significantly, with the constant production of new and pertinent knowledge of various kinds. Finally, I will describe the important paradox of autonomy, which rests on the fact that, on one side, cognitions (from science to morality, from common knowledge to philosophy, etc.) are necessary for performing autonomous actions and decisions, because we need to believe in rules that justify and identify our choices; on the other side, these same rules can become (for example, as a result of contrasting with other internalized and approved moral rules or knowledge contents) oppressive norms that diminish autonomy and can thus, paradoxically, defeat agents’ autonomous capacity “to take ownership”.


Author(s):  
Douglas W. Lee ◽  
Daniel W. Fitzick ◽  
Ellen J. Bass

In systems that support dynamic allocation of work across human and autonomous agents, analyzing the implications of task sharing can support operational concept development. Computational tools should address not only the taskwork but also the teamwork emerging from the allocation. This paper describes a computational human agent model that manages work by executing a task, delaying its execution, or delegating activities to other agents. The agent considers its capacity and its delegation strategies when coordinating with other agents. Using a framework for simulating multiple types of agents, case studies apply this computational human agent model to evaluate a concept of operation that distributes work between a delegation-capable air traffic controller and flight deck crews. The case studies show how capacity changes agent utilization and how delegation strategies redistribute taskwork across multiple agents while creating teamwork demands.
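
As a rough illustration of the kind of model described, the sketch below assumes a simple capacity threshold and a delegate-when-overloaded strategy; both the threshold and the strategy, as well as the task names, are illustrative stand-ins rather than the paper's actual parameters.

```python
from collections import deque

class HumanAgentModel:
    """Sketch of an agent that executes, delays, or delegates tasks by capacity."""
    def __init__(self, capacity, delegates):
        self.capacity = capacity        # max tasks workable this time step
        self.delegates = delegates      # other agents that can accept work
        self.queue = deque()            # delayed taskwork

    def manage(self, incoming):
        self.queue.extend(incoming)
        executed, delegated = [], []
        while self.queue:
            if len(executed) < self.capacity:
                executed.append(self.queue.popleft())            # do it now
            elif self.delegates:
                # Delegation redistributes taskwork but creates teamwork
                # demands: the delegator must monitor and coordinate.
                delegated.append((self.queue.popleft(), self.delegates[0]))
            else:
                break                                            # delay the rest
        return executed, delegated, list(self.queue)

controller = HumanAgentModel(capacity=2, delegates=["flight_deck_crew"])
print(controller.manage(["clear FL350", "vector traffic", "handoff", "reroute"]))
```

Varying `capacity` and the delegation strategy in simulation is the kind of what-if analysis such a model enables during concept-of-operations evaluation.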


Author(s):  
Huao Li ◽  
Keyang Zheng ◽  
Michael Lewis ◽  
Dana Hughes ◽  
Katia Sycara

The ability to make inferences about others’ mental states is referred to as having a Theory of Mind (ToM). This ability is the foundation of many human social interactions such as empathy, teamwork, and communication. As intelligent agents become involved in diverse human-agent teams, they too are expected to be socially intelligent in order to become effective teammates. To provide a feasible baseline for future socially intelligent agents, this paper presents an experimental study of the process of human ToM inference. Human observers’ inferences are compared with participants’ verbally reported mental states in a simulated search and rescue task. Results show that ToM inference is a challenging task even for experienced human observers.
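
One simple way to quantify the comparison the study describes is per-time-step agreement between the observer's inferred mental-state labels and the participant's own verbal reports. The sketch below shows such an agreement measure; the labels and coding scheme are hypothetical, not the study's actual protocol.

```python
from collections import Counter

def tom_inference_agreement(inferred, reported):
    """Fraction of time steps where the observer's inferred mental-state label
    matches the participant's own verbal report."""
    assert len(inferred) == len(reported)
    hits = sum(i == r for i, r in zip(inferred, reported))
    return hits / len(inferred)

# Hypothetical per-time-step mental-state labels in a search-and-rescue task.
observer    = ["searching", "rescuing", "searching", "lost"]
participant = ["searching", "rescuing", "lost",      "lost"]
print(tom_inference_agreement(observer, participant))   # 0.75
print(Counter(zip(observer, participant)))              # where they disagree
```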


2022 ◽  
pp. 35-58
Author(s):  
Ozge Doguc

Many software automation techniques have been developed in the last decade to cut costs, improve customer satisfaction, and reduce errors. Robotic process automation (RPA) has become increasingly popular recently. RPA offers software robots (bots) that mimic human behavior. Attended robots work in tandem with humans and can operate while the human agent is active on the computer. Unattended robots, on the other hand, operate behind locked screens and are designed to execute automations that require no human intervention. RPA robots are equipped with artificial intelligence engines such as computer vision and machine learning, and both robot types can learn automations by recording human actions.
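
The record-and-replay idea behind RPA can be sketched in a few lines, here using the third-party pyautogui library for replay; a production RPA platform adds computer vision, UI selectors, and orchestration on top. The recorded action list, coordinates, and field names below are hypothetical.

```python
import time
import pyautogui  # third-party: pip install pyautogui

# A hypothetical recording of human actions, as (delay_s, kind, argument).
# Real RPA recorders capture these from live mouse and keyboard events.
recording = [
    (0.5, "click",  (220, 340)),        # focus the invoice-number field
    (0.2, "type",   "INV-2024-0042"),   # enter the invoice number
    (0.2, "hotkey", ("ctrl", "s")),     # save the form
]

def replay(actions):
    """Replay a recorded action list, mimicking the original human behavior."""
    for delay, kind, arg in actions:
        time.sleep(delay)
        if kind == "click":
            pyautogui.click(*arg)
        elif kind == "type":
            pyautogui.typewrite(arg)
        elif kind == "hotkey":
            pyautogui.hotkey(*arg)

replay(recording)  # an attended bot would run this alongside the active user
```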

