Implicit theories of malleability in machines influence the perception and trust repair of intelligent agent

2021
Author(s): Taenyun Kim, Hayeon Song

After an intelligent agent makes an error, trust repair can be attempted to regain the lost trust. Although several repair strategies are possible, individuals' underlying perception of the malleability of machines, their implicit theory, can also influence the agent's trust repair process. In this study, we investigated the influence of implicit theories of machines on the effectiveness of an intelligent agent's apology after a trust violation. A 2 (implicit theory: incremental vs. entity) x 2 (apology attribution: internal vs. external) between-subjects experiment simulating stock market investment was conducted online (N = 150). Participants were placed in a situation in which they had to make investment decisions based on the recommendations of an artificial intelligence agent. We created an investment game consisting of 40 investment opportunities to observe the process of trust development, trust violation, and trust repair. The results show that, after the trust violation, trust was damaged less severely in the incremental than in the entity implicit-theory condition, and in the external than in the internal attribution apology condition. However, trust recovered most strongly in the entity-external condition. Both theoretical and practical implications are discussed.

2020
Author(s): Taenyun Kim, Hayeon Song

Trust is essential to individuals' perception, behavior, and evaluation of intelligent agents. Indeed, it is the primary motive for people to accept new technology, so it is crucial to repair trust when it is damaged. This study investigated how intelligent agents should apologize to recover trust, and how the effectiveness of the apology differs when the agent is human-like rather than machine-like, drawing on two seemingly competing frameworks: the CASA (Computers-Are-Social-Actors) paradigm and automation bias. A 2 (agent: human-like vs. machine-like) x 2 (apology attribution: internal vs. external) between-subjects experiment was conducted (N = 193) in the context of the stock market. Participants were presented with a scenario in which they made investment choices with the help of an artificial intelligence agent's advice. To trace the trajectory of initial trust building, trust violation, and trust repair, we designed an investment game consisting of 5 rounds of 8 investment choices (40 choices in total). The results show that trust was repaired more effectively when a human-like agent apologized with internal rather than external attribution. The opposite pattern was observed among participants with machine-like agents: the external attribution condition showed better trust repair than the internal one. Both theoretical and practical implications are discussed.
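As a rough illustration of the paradigm these two abstracts describe, the sketch below simulates a round-based advice game in which an agent recommends investments, commits a single programmed error (the trust violation), and then apologizes with either internal or external attribution. The names, payoffs, update rules, and position of the violation are assumptions made purely for illustration; they are not the authors' actual materials or results.

```python
import random

ROUNDS, CHOICES_PER_ROUND = 5, 8        # 40 decisions total, as in the abstracts
VIOLATION_AT = 24                        # hypothetical point where the agent errs

def run_session(apology_attribution="internal", seed=0):
    """Simulate one participant session: advice, one programmed error, apology."""
    rng = random.Random(seed)
    trust = 0.5                          # hypothetical trust score in [0, 1]
    history = []
    for t in range(ROUNDS * CHOICES_PER_ROUND):
        advice_correct = (t != VIOLATION_AT)   # the agent fails exactly once
        follows_advice = rng.random() < trust  # advice taking as a proxy for trust
        outcome = advice_correct if follows_advice else rng.random() < 0.5
        # Toy trust update: grows slowly with good advice, drops sharply at the violation.
        if advice_correct:
            trust = min(1.0, trust + 0.02)
        else:
            trust = max(0.0, trust - 0.4)
            # Apology right after the violation; size of repair depends on attribution
            # (an assumption for illustration, not the papers' estimated effect).
            trust += 0.15 if apology_attribution == "external" else 0.10
        history.append((t, trust, follows_advice, outcome))
    return history

if __name__ == "__main__":
    for attribution in ("internal", "external"):
        final_trust = run_session(attribution)[-1][1]
        print(attribution, round(final_trust, 2))
```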


2021
Vol 35 (2)
Author(s): E. S. Kox, J. H. Kerstholt, T. F. Hueting, P. W. de Vries

The role of intelligent agents becomes more social as they are expected to act in direct interaction, involvement, and/or interdependency with humans and other artificial entities, as in Human-Agent Teams (HAT). The highly interdependent and dynamic nature of teamwork demands correctly calibrated trust among team members. Trust violations are an inevitable aspect of the cycle of trust, and since repairing damaged trust proves more difficult than building trust initially, effective trust repair strategies are needed to ensure durable and successful team performance. The aim of this study was to explore the effectiveness of different trust repair strategies offered by an intelligent agent by measuring the development of human trust and advice taking in a Human-Agent Teaming task. Data were obtained using a task environment resembling a first-person shooter game. Participants carried out a mission in collaboration with their artificial team member. A trust violation was provoked when the agent failed to detect an approaching enemy. After this, the agent offered one of four trust repair strategies, composed of the apology components explanation and expression of regret (either one alone, both, or neither). Our results indicate that expressing regret was crucial for effective trust repair. After trust declined due to the agent's violation, trust only recovered significantly when an expression of regret was included in the apology, and this effect was stronger when an explanation was added. In this context, the intelligent agent was most effective at rebuilding trust when it provided an apology that was both affective and informational. Finally, the implications of our findings for the design and study of Human-Agent trust repair are discussed.


2001
Vol 5 (2)
pp. 169-182
Author(s): Michael W. Morris, Tanya Menon, Daniel R. Ames

Many tendencies in social perceivers' judgments about individuals and groups can be integrated in terms of the premise that perceivers rely on implicit theories of agency acquired from cultural traditions. Whereas American culture primarily conceptualizes agency as a property of individual persons, other cultures conceptualize agency primarily in terms of collectives such as groups or nonhuman actors such as deities or fate. Cultural conceptions of agency exist in public forms (discourses, texts, and institutions) and private forms (perceivers' knowledge structures), and the more prominent the public representations of a specific conception in a society, the more chronically accessible it will be in perceivers' minds. We review evidence for these claims by contrasting North American and Chinese cultures. From this integrative model of social perception as mediated by agency conceptions, we draw insights for research on implicit theories and research on culture. What implicit theory research gains is a better grasp on the content, origins, and variation of the knowledge structures central to social perception. What cultural psychology gains is a middle-range model of the mechanism underlying cultural influence on dispositional attribution, which yields precise predictions about the domain specificity and dynamics of cultural differences.


2019
Vol 10 (1)
pp. 24-45
Author(s): Samuel Allen Alexander

Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure the numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? We propose an elegant answer based on the following insight: we can view Legg-Hutter agents as candidates in an election, whose voters are environments, letting each environment vote (via its rewards) which agent (if either) is more intelligent. This leads to an abstract family of comparators simple enough that we can prove some structural theorems about them. It is an open question whether these structural theorems apply to more practical intelligence measures.
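A minimal sketch of the election metaphor, under simplifying assumptions of our own: a finite set of environments, each returning a total reward for an agent, with each environment voting for the agent it rewards more. The function names and toy environments below are hypothetical and are not taken from the paper, which works with the full Legg-Hutter formalism.

```python
from typing import Callable, List

# An "environment" here is just a function mapping an agent (a policy from
# observations to actions) to the total reward it earns; this finite,
# deterministic setup is a simplification used only for illustration.
Agent = Callable[[int], int]
Environment = Callable[[Agent], float]

def election_comparator(agent_a: Agent, agent_b: Agent,
                        environments: List[Environment]) -> str:
    """Let each environment 'vote' for the agent it rewards more (abstaining on ties)."""
    votes = {"A": 0, "B": 0}
    for env in environments:
        reward_a, reward_b = env(agent_a), env(agent_b)
        if reward_a > reward_b:
            votes["A"] += 1
        elif reward_b > reward_a:
            votes["B"] += 1
    if votes["A"] > votes["B"]:
        return "A is more intelligent (by this comparator)"
    if votes["B"] > votes["A"]:
        return "B is more intelligent (by this comparator)"
    return "incomparable / tie"

if __name__ == "__main__":
    # Two toy agents: one always answers 1, one echoes the observation's parity.
    always_one = lambda obs: 1
    parity = lambda obs: obs % 2
    # Toy environments that reward matching a hidden target over a few observations.
    def make_env(target_fn):
        return lambda agent: sum(agent(o) == target_fn(o) for o in range(10))
    envs = [make_env(lambda o: 1), make_env(lambda o: o % 2), make_env(lambda o: 0)]
    print(election_comparator(always_one, parity, envs))
```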


2020
Vol 9 (3)
pp. 1159-1166
Author(s): Budi Laksono Putro, Yusep Rosmansyah, Suhardi Suhardi

Group development is the first and most important step for the success of collaborative problem solving (CPS) learning in a digital learning environment (DLE). A literature study of the intelligent agent domain is therefore needed for group development in collaborative learning in the DLE. This paper presents a systematic literature review (SLR) of intelligent agents for group formation from 2001 to 2019. It aims to answer four research questions: 1) What components are needed to develop intelligent agents for group development? 2) What is the intelligent agent model for group development? 3) What metrics measure intelligent agent performance? 4) What framework guides the development of an intelligent agent? The components of the intelligent agent model are member attributes, group attributes (group constraints), and intelligent techniques. This research follows Srba and Bielikova's group development model, whose stages are formation, performing, and closing. The intelligent agent model applies at the formation stage, and the performance metric for the intelligent agent applies at the performing stage. The framework for developing an intelligent agent serves as a reference for the stages of development, component selection techniques, and performance measurement of an intelligent agent.
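To make the components concrete, here is a small hedged sketch of a group-formation step driven by member attributes and a group-size constraint. The greedy round-robin heuristic, the Member fields, and the group size are illustrative assumptions; the reviewed papers use a variety of (often more sophisticated) intelligent techniques.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Member:
    name: str
    skill: float          # member attribute, e.g. a prior test score
    interest: str         # member attribute, e.g. preferred topic

def form_groups(members: List[Member], group_size: int) -> List[List[Member]]:
    """Greedy heterogeneous grouping: spread skill levels across groups."""
    ranked = sorted(members, key=lambda m: m.skill, reverse=True)
    n_groups = max(1, len(members) // group_size)   # group size as a constraint
    groups = [[] for _ in range(n_groups)]
    # Deal members round-robin so each group mixes high- and low-skill learners.
    for i, member in enumerate(ranked):
        groups[i % n_groups].append(member)
    return groups

if __name__ == "__main__":
    roster = [Member(f"s{i}", skill=i / 10, interest="math" if i % 2 else "bio")
              for i in range(12)]
    for group in form_groups(roster, group_size=4):
        print([m.name for m in group])
```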


2019
pp. 1134-1143
Author(s): Deepshikha Bhargava

Over the decades, new technologies, algorithms, and methods have evolved and been proposed. We can witness a paradigm shift from typewriters to computers, mechanics to mechatronics, physics to aerodynamics, chemistry to computational chemistry, and so on. Such advancements are the result of continuing research, which remains a driving force for researchers. In the same way, research in the field of artificial intelligence (Russell & Norvig, 2003) is a major thrust area. Research in AI has produced concepts such as natural language processing, expert systems, software agents, learning, knowledge management, and robotics, to name a few. The objective of this chapter is to trace the research path from software agents to robotics. The chapter begins with an introduction to software agents, then discusses intelligent agents, autonomous agents, autonomous robots, and intelligent robots in separate sections, and finally concludes with the fine line between intelligent agents and autonomous robots.


2012
pp. 1225-1233
Author(s): Christos N. Moridis, Anastasios A. Economides

In recent decades there has been extensive progress on several Artificial Intelligence (AI) concepts, such as that of the intelligent agent. Meanwhile, it has been established that emotions play a crucial role in human reasoning and learning. Thus, developing an intelligent agent able to recognize and express emotions has been considered an enormous challenge for AI researchers. Embedding a computational model of emotions in intelligent agents can be beneficial in a variety of domains, including e-learning applications. However, until recently the emotional aspects of human learning were not taken into account when designing e-learning platforms. Various issues arise in the development of affective agents in e-learning environments, such as the agents' appearance and the ways those agents recognize learners' emotions and express emotional support. Embodied conversational agents (ECAs) with empathetic behaviour have been suggested as one effective way to provide emotional feedback on learners' emotions. There has been valuable research in this direction, but much work remains to advance scientific knowledge.


Author(s): Murugan Sethuraman Sethuraman

Intelligence has been defined in different ways, including the abilities for abstract thought, understanding, communication, reasoning, learning, retention, planning, and problem solving. Intelligence is most widely studied in humans but has also been observed in animals and plants. AI is both the intelligence of machines and the branch of computer science that aims to create it through the study and design of intelligent agents or rational agents, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. Achievements to date include constrained and well-defined problems such as games, crossword solving, and optical character recognition. Among the traits that researchers hope machines will exhibit are reasoning, knowledge, planning, learning, communication, perception, and the ability to move and manipulate objects. In the field of AI there is no consensus on how closely the brain should be simulated.


Author(s): Grzegorz Musiolik

Artificial intelligence is evolving rapidly and will have a great impact on society in the future. One important question that still cannot be answered satisfactorily is whether the decisions of an intelligent agent can be predicted. This raises the more general question of whether such agents can be controlled and whether future robotic applications can be safe. This chapter shows that unpredictable systems are very common in mathematics and physics, even though the underlying mathematical structure can be very simple. It also shows that such unpredictability can emerge for intelligent agents in reinforcement learning, especially in complex tasks with many input parameters. An observer would not be able to distinguish this unpredictability from free will on the agent's part. This raises ethical questions and safety issues, which are briefly presented.
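As a standard illustration of the claim that very simple mathematical structures can be practically unpredictable, the sketch below iterates the logistic map and shows how two nearly identical initial conditions diverge. The logistic map is a textbook example of deterministic chaos chosen here for illustration; it is not necessarily the system analyzed in the chapter.

```python
def logistic_map(x: float, r: float = 4.0) -> float:
    """One step of the logistic map x -> r * x * (1 - x); chaotic for r = 4."""
    return r * x * (1.0 - x)

def trajectory(x0: float, steps: int) -> list:
    """Iterate the map from x0, returning all intermediate values."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_map(xs[-1]))
    return xs

if __name__ == "__main__":
    a = trajectory(0.2, 50)            # one initial condition
    b = trajectory(0.2 + 1e-9, 50)     # a nearly indistinguishable one
    # After a few dozen steps the two trajectories bear no resemblance, so
    # long-run prediction from imprecise measurements is effectively hopeless,
    # even though the rule itself is deterministic and trivially simple.
    for t in (0, 10, 30, 50):
        print(t, round(a[t], 6), round(b[t], 6))
```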

