Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents

2021 ◽  
Vol 45 (10) ◽  
Author(s):  
Markus Kneer

1999 ◽  
Vol 38 (03) ◽  
pp. 154-157
Author(s):  
W. Fierz ◽  
R. Grütter

Abstract: When dealing with biological organisms, one has to take into account some peculiarities that significantly affect the representation of knowledge about them. These are complemented by the limitations artificial agents face in representing propositional knowledge, i.e., the majority of clinical knowledge. Thus, the opportunities to automate the management of clinical knowledge are largely restricted to closed contexts and to procedural knowledge. Therefore, in dynamic and complex real-world settings such as health care provision to HIV-infected patients, human and artificial agents must collaborate in order to optimize the time/quality antinomy of the services provided. At the implementation level, this yields the overall requirement that the language used to model clinical contexts should be both human- and machine-interpretable. The eXtensible Markup Language (XML), which is used to develop an electronic study form, is evaluated against this requirement, and its contribution to the collaboration of human and artificial agents in the management of clinical knowledge is analyzed.
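The central requirement above, that clinical context models be both human- and machine-interpretable, can be illustrated with a minimal sketch. The XML fragment and element names below (visit, cd4_count, viral_load) are hypothetical, not taken from the study form described in the paper; the point is only that the same document can be read by a clinician as structured text and parsed by an artificial agent.

```python
# Minimal sketch: a hypothetical XML fragment for an HIV study form,
# readable by humans as structured text and parseable by machines.
# All element and attribute names are illustrative only.
import xml.etree.ElementTree as ET

FORM_XML = """
<studyForm patientId="anonymous-001">
  <visit date="1999-03-15">
    <cd4_count unit="cells/uL">350</cd4_count>
    <viral_load unit="copies/mL">12000</viral_load>
    <clinicianNote>Patient stable, continue current regimen.</clinicianNote>
  </visit>
</studyForm>
"""

def summarize(form_xml: str) -> dict:
    """Extract the machine-interpretable portion of the form."""
    root = ET.fromstring(form_xml)
    visit = root.find("visit")
    return {
        "patient": root.get("patientId"),
        "date": visit.get("date"),
        "cd4_count": int(visit.findtext("cd4_count")),
        "viral_load": int(visit.findtext("viral_load")),
        # The free-text note remains propositional clinical knowledge
        # that the machine merely transports to the human agent.
        "note": visit.findtext("clinicianNote"),
    }

if __name__ == "__main__":
    print(summarize(FORM_XML))
```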


Author(s):  
Jens Claßen ◽  
James Delgrande

With the advent of artificial agents in everyday life, it is important that these agents are guided by social norms and moral guidelines. Notions of obligation, permission, and the like have traditionally been studied in the field of Deontic Logic, where deontic assertions generally refer to what an agent should or should not do; that is, they refer to actions. In Artificial Intelligence, the Situation Calculus is (arguably) the best-known and most-studied formalism for reasoning about action and change. In this paper, we integrate these two areas by incorporating deontic notions into Situation Calculus theories. We do this by treating deontic assertions as constraints, expressed as a set of conditionals, which apply to complex actions expressed as GOLOG programs. These constraints induce a ranking of "ideality" over possible future situations. This ranking is in turn used to guide an agent in its planning deliberation toward a course of action that best adheres to the deontic constraints. We present a formalization that includes a wide class of (dyadic) deontic assertions, lets us distinguish prima facie from all-things-considered obligations, and particularly addresses contrary-to-duty scenarios. We furthermore present results on compiling the deontic constraints directly into the Situation Calculus action theory, so as to obtain an agent that respects the given norms but works solely with standard reasoning and planning techniques.
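The core mechanism, conditional deontic constraints inducing a ranking of "ideality" over candidate future situations, can be sketched in a few lines. The representation below (situations as sets of performed actions, constraints as condition/obligation pairs, ranking by violation count) is a deliberately toy stand-in, not the Situation Calculus/GOLOG formalization developed in the paper.

```python
# Toy sketch: rank candidate courses of action by how many conditional
# deontic constraints they violate (fewer violations = more "ideal").
# Action names and constraints are illustrative, not from the paper.
from typing import Callable, FrozenSet, List, Tuple

Situation = FrozenSet[str]  # the set of actions performed so far
# A constraint is (condition, obligation): if the condition holds in a
# situation, the obligation ought to hold there as well.
Constraint = Tuple[Callable[[Situation], bool], Callable[[Situation], bool]]

def violations(s: Situation, constraints: List[Constraint]) -> int:
    """Count constraints whose condition holds but whose obligation fails."""
    return sum(1 for cond, ob in constraints if cond(s) and not ob(s))

def most_ideal(candidates: List[Situation],
               constraints: List[Constraint]) -> Situation:
    """Pick the candidate future situation with the fewest violations."""
    return min(candidates, key=lambda s: violations(s, constraints))

if __name__ == "__main__":
    constraints = [
        # "You ought not to enter without knocking."
        (lambda s: "enter" in s, lambda s: "knock" in s),
        # Contrary-to-duty flavour: if you did enter without knocking,
        # you ought at least to apologize.
        (lambda s: "enter" in s and "knock" not in s,
         lambda s: "apologize" in s),
    ]
    plans = [frozenset({"knock", "enter"}),
             frozenset({"enter", "apologize"}),
             frozenset({"enter"})]
    print(most_ideal(plans, constraints))  # the plan that knocks and enters
```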


2020 ◽  
Author(s):  
Michael Laakasuo ◽  
Anton Berg ◽  
Jukka Sundvall ◽  
Marianna Drosinou ◽  
Volo Herzon ◽  
...  

In this chapter, we provide the theoretical background for the discussion of issues related to AIs. The main topics, theories, and frameworks covered are mind perception and moral cognition, moral psychology, evolutionary psychology, transhumanism, and ontological categories shaped by evolution.


2020 ◽  
Author(s):  
Agnieszka Wykowska ◽  
Jairo Pérez-Osorio ◽  
Stefan Kopp

This booklet is a collection of the position statements accepted for the HRI’20 conference workshop “Social Cognition for HRI: Exploring the relationship between mindreading and social attunement in human-robot interaction” (Wykowska, Perez-Osorio & Kopp, 2020). Unfortunately, due to the rapid unfolding of the novel coronavirus at the beginning of the present year, the conference, and consequently our workshop, were canceled. In light of these events, we decided to put together the position statements accepted for the workshop. The contributions collected in these pages highlight the role of the attribution of mental states to artificial agents in human-robot interaction, and specifically the presence and quality of the social attunement mechanisms that are known to make human interaction smooth, efficient, and robust. These papers also accentuate the importance of a multidisciplinary approach to advancing our understanding of the factors and consequences of social interactions with artificial agents.


Author(s):  
Alistair M. C. Isaac ◽  
Will Bridewell

It is easy to see that social robots will need the ability to detect and evaluate deceptive speech; otherwise they will be vulnerable to manipulation by malevolent humans. More surprisingly, we argue that effective social robots must also be able to produce deceptive speech. Many forms of technically deceptive speech perform a positive pro-social function, and the social integration of artificial agents will be possible only if they participate in this market of constructive deceit. We demonstrate that a crucial condition for detecting and producing deceptive speech is possession of a theory of mind. Furthermore, strategic reasoning about deception requires identifying a type of goal distinguished by its priority over the norms of conversation, which we call an ulterior motive. We argue that this goal is the appropriate target for ethical evaluation, not the veridicality of speech per se. Consequently, deception-capable robots are compatible with the most prominent programs to ensure that robots behave ethically.
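The claim that ethical evaluation should target the ulterior motive rather than the veridicality of the utterance can be made concrete with a small sketch. The data structure and the prosocial/exploitative labels below are illustrative assumptions, not the authors' formal model of deceptive speech.

```python
# Toy sketch: evaluate an utterance by the goal behind it, not by whether
# it is literally true. Field names and example motives are illustrative.
from dataclasses import dataclass

@dataclass
class Utterance:
    content: str
    literally_true: bool        # veridicality of the speech act
    ulterior_motive: str        # goal given priority over conversational norms
    motive_is_prosocial: bool   # e.g. politeness or comfort vs. exploitation

def ethically_acceptable(u: Utterance) -> bool:
    """The target of evaluation is the ulterior motive, not veridicality."""
    return u.motive_is_prosocial

if __name__ == "__main__":
    white_lie = Utterance("Your presentation went great!",
                          literally_true=False,
                          ulterior_motive="reassure the speaker",
                          motive_is_prosocial=True)
    scam = Utterance("This investment is guaranteed to double.",
                     literally_true=False,
                     ulterior_motive="extract money",
                     motive_is_prosocial=False)
    # Same veridicality, different ethical status.
    print(ethically_acceptable(white_lie), ethically_acceptable(scam))
```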


2021 ◽  
Vol 12 (1) ◽  
pp. 310-335
Author(s):  
Selmer Bringsjord ◽  
Naveen Sundar Govindarajulu ◽  
Michael Giancola

Abstract: Suppose an artificial agent $a_{\text{adj}}$, as time unfolds, (i) receives from multiple artificial agents (which may, in turn, themselves have received from yet other such agents…) propositional content, and (ii) must solve an ethical problem on the basis of what it has received. How should $a_{\text{adj}}$ adjudicate what it has received in order to produce such a solution? We consider an environment infused with logicist artificial agents $a_1, a_2, \ldots, a_n$ that sense and report their findings to “adjudicator” agents who must solve ethical problems. (Many if not most of these agents may be robots.) In such an environment, inconsistency is a virtual guarantee: $a_{\text{adj}}$ may, for instance, receive a report from $a_1$ that proposition $\phi$ holds, then from $a_2$ that $\neg\phi$ holds, and then from $a_3$ that neither $\phi$ nor $\neg\phi$ should be believed, but rather $\psi$ instead, at some level of likelihood. We further assume that agents receiving such incompatible reports will nonetheless sometimes simply need, before long, to make decisions on the basis of these reports, in order to try to solve ethical problems. We provide a solution to such a quandary: AI capable of adjudicating competing reports from subsidiary agents through time, and delivering to humans a rational, ethically correct (relative to underlying ethical principles) recommendation based upon such adjudication. To illuminate our solution, we anchor it to a particular scenario.
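A minimal sketch of the adjudication problem the abstract describes: subsidiary agents report a proposition, its negation, or an alternative, each with some likelihood, and the adjudicator must settle on a single verdict before acting. The likelihood-weighted vote below is an illustrative assumption, not the logicist mechanism developed in the paper.

```python
# Toy sketch: adjudicate inconsistent reports about a proposition phi.
# Each report carries the reporting agent, a claim, and a likelihood.
# The weighted-vote rule is illustrative, not the paper's mechanism.
from collections import defaultdict
from typing import List, NamedTuple

class Report(NamedTuple):
    agent: str
    claim: str         # e.g. "phi", "not phi", "psi"
    likelihood: float  # the reporting agent's confidence in [0, 1]

def adjudicate(reports: List[Report]) -> str:
    """Return the claim with the greatest total likelihood-weighted support."""
    support = defaultdict(float)
    for r in reports:
        support[r.claim] += r.likelihood
    return max(support, key=support.get)

if __name__ == "__main__":
    reports = [
        Report("a1", "phi", 0.8),      # a1 reports that phi holds
        Report("a2", "not phi", 0.6),  # a2 reports the negation
        Report("a3", "psi", 0.9),      # a3 recommends believing psi instead
    ]
    print(adjudicate(reports))  # psi
```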


iScience ◽  
2021 ◽  
pp. 102340
Author(s):  
Matthew I. Jones ◽  
Scott D. Pauls ◽  
Feng Fu
