Reasoning about Success and Failure in Intentional Agents

Author(s): Timothy William Cleaver, Abdul Sattar, Kewen Wang
2020
Author(s): Lorijn Zaadnoordijk, Tim Bayne

As human adults, we experience ourselves as intentional agents. Here, we address how intentional agency and the corresponding agentive experiences emerge in infancy. When formulating a developmental theory of intentional agency, we encounter a so-called paradox of agency: three plausible theses regarding intentional agency that in combination seem to make it impossible for the developing infant to acquire a sense of agency. By recognizing various types of intentions, we propose a framework in which the paradox can be resolved, allowing infants to bootstrap their way to becoming intentional agents and experiencing a sense of agency.


Author(s): Elizabeth Schechter

This chapter defends the 2-agents claim, according to which the two hemispheres of a split-brain subject are associated with distinct intentional agents. The empirical basis of this claim is that, while both hemispheres are the source or site of intentions, the capacity to integrate them in practical reasoning no longer operates interhemispherically after split-brain surgery. As a result, the right hemisphere-associated agent, R, and the left hemisphere-associated agent, L, enjoy intentional autonomy from each other. Although the positive case for the 2-agents claim is grounded mainly in experimental findings, the claim is not contradicted by what we know of split-brain subjects’ ordinary behavior, that is, the way they act outside of experimental conditions.


Author(s): Christian List

The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and a moral status? I will tentatively defend the (increasingly widely held) view that, under certain conditions, artificial intelligent systems, like corporate entities, might qualify as responsible moral agents and as holders of limited rights and legal personhood. I will further suggest that regulators should permit the use of autonomous artificial systems in high-stakes settings only if they are engineered to function as moral (not just intentional) agents and/or there is some liability-transfer arrangement in place. I will finally raise the possibility that if artificial systems ever became phenomenally conscious, there might be a case for extending a stronger moral status to them, but argue that, as of now, this remains very hypothetical.


2010, Vol 39, pp. 217-268
Author(s): M. O. Riedl, R. M. Young

Narrative, and in particular storytelling, is an important part of the human experience. Consequently, computational systems that can reason about narrative can be more effective communicators, entertainers, educators, and trainers. One of the central challenges in computational narrative reasoning is narrative generation, the automated creation of meaningful event sequences. There are many factors -- logical and aesthetic -- that contribute to the success of a narrative artifact. Central to this success is its understandability. We argue that the following two attributes of narratives are universal: (a) the logical causal progression of plot, and (b) character believability. Character believability is the perception by the audience that the actions performed by characters do not negatively impact the audience's suspension of disbelief. Specifically, characters must be perceived by the audience to be intentional agents. In this article, we explore the use of refinement search as a technique for solving the narrative generation problem -- to find a sound and believable sequence of character actions that transforms an initial world state into a world state in which goal propositions hold. We describe a novel refinement search planning algorithm -- the Intent-based Partial Order Causal Link (IPOCL) planner -- that, in addition to creating causally sound plot progression, reasons about character intentionality by identifying possible character goals that explain their actions and creating plan structures that explain why those characters commit to their goals. We present the results of an empirical evaluation that demonstrates that narrative plans generated by the IPOCL algorithm support audience comprehension of character intentions better than plans generated by conventional partial-order planners.
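Because this abstract describes a concrete refinement-search procedure, a brief sketch may help fix ideas. The following is a minimal, generic partial-order causal-link (POCL) refinement loop in Python; the class and function names are illustrative assumptions, not the IPOCL implementation of Riedl and Young, and threat resolution, ordering constraints, and IPOCL's intent handling are omitted or only noted in comments.

```python
# Minimal partial-order causal-link (POCL) refinement sketch (illustrative only;
# not the IPOCL planner of Riedl & Young). Threat resolution, ordering
# constraints, and IPOCL's intent frames are omitted or merely noted.
from dataclasses import dataclass
from typing import FrozenSet, List, Optional, Tuple

@dataclass(frozen=True)
class Step:
    name: str
    preconditions: FrozenSet[str]
    effects: FrozenSet[str]

@dataclass
class Plan:
    steps: List[Step]
    links: List[Tuple[Step, str, Step]]       # causal links: (producer, condition, consumer)
    open_conditions: List[Tuple[str, Step]]   # flaws: preconditions not yet supported

def refine(plan: Plan, operators: List[Step]) -> Optional[Plan]:
    """Depth-first refinement: repair one open condition per recursive call."""
    if not plan.open_conditions:
        return plan                            # no flaws left: a (toy) solution
    cond, consumer = plan.open_conditions[0]
    rest = plan.open_conditions[1:]
    # Try to support the condition with an existing step first, then a new one.
    candidates = [(s, False) for s in plan.steps] + [(op, True) for op in operators]
    for op, is_new in candidates:
        if cond not in op.effects:
            continue
        # An IPOCL-style planner would also record an "intent flaw" here, asking
        # which character goal motivates the newly added step.
        child = Plan(
            steps=plan.steps + [op] if is_new else plan.steps,
            links=plan.links + [(op, cond, consumer)],
            open_conditions=rest + ([(p, op) for p in op.preconditions] if is_new else []),
        )
        solution = refine(child, operators)
        if solution is not None:
            return solution
    return None                                # dead end: backtrack

# Toy usage: reach "has_treasure" starting from "at_castle".
start = Step("START", frozenset(), frozenset({"at_castle"}))
goal = Step("GOAL", frozenset({"has_treasure"}), frozenset())
travel = Step("travel_to_cave", frozenset({"at_castle"}), frozenset({"at_cave"}))
take = Step("take_treasure", frozenset({"at_cave"}), frozenset({"has_treasure"}))

initial = Plan(steps=[start, goal], links=[],
               open_conditions=[(p, goal) for p in goal.preconditions])
result = refine(initial, [travel, take])
if result is not None:
    print([s.name for s in result.steps])
```

In IPOCL proper, per the abstract above, the additional intent flaws are repaired by identifying character goals that explain the characters' actions and by building plan structures that record why those characters commit to their goals.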


2020
Author(s): Johannes Schultz, Chris D. Frith

To survive, all animals need to predict what other agents are going to do next. The first step is to detect whether an object is an agent and, if so, how sophisticated it is. To this end, visual cues are especially important: the form of the agent and the nature of its movements. Once an agent is identified, its movements, however sophisticated, can be anticipated in the short term on the basis of purely physical constraints, but, in the longer term, it is useful to take account of the agent’s goals and intentions. Goal-directed agents are marked by the rationality of their movements, reaching their goals by the shortest or least effortful path. Observing goal-directed behaviour activates the brain’s action observation/mirror neuron network. The observer’s own action-generating mechanism has an important role in predicting future movements of goal-directed agents. Intentions have a critical role in determining actions when agents interact with other agents. In such interactions, movements can become communicative rather than directed to immediate goals. Also, each agent can be trying to predict the behaviour of the other, leading to a recursive arms race. It is difficult to infer intentional behaviour from movement kinematics, and interpretation is much more dependent upon prior beliefs about the agent. When people believe that they are interacting with an intentional agent, the brain’s mentalising system is activated as the person tries to assess the degree of sophistication of the agent. Several biologically constrained computational models of action recognition are available, but equivalent models for understanding intentional agents remain to be developed.
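The efficiency cue mentioned here, that goal-directed agents reach their goals by the shortest or least effortful path, can be illustrated with a toy goal-inference heuristic. The sketch below is only an illustration under strong assumptions (straight-line distance as the ideal cost, a known set of candidate goals, a softmax over the "wasted" distance); it is not a model from this chapter, and all names are hypothetical.

```python
# Toy goal inference from the efficiency principle described above: an observed
# trajectory is more consistent with a goal the less it deviates from the
# shortest path to that goal. Illustrative assumptions throughout.
import math

def path_length(path):
    """Total Euclidean length of a sequence of 2-D points."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def goal_scores(path, candidate_goals, beta=2.0):
    """Softmax score for each candidate goal, based on the detour the observed
    path would imply if that goal were the agent's target."""
    start, current = path[0], path[-1]
    raw = {}
    for name, goal in candidate_goals.items():
        ideal = math.dist(start, goal)                        # shortest-path cost
        implied = path_length(path) + math.dist(current, goal)
        raw[name] = math.exp(-beta * (implied - ideal))       # penalise wasted effort
    total = sum(raw.values())
    return {name: value / total for name, value in raw.items()}

# An agent at (0, 0) has moved straight along the x-axis toward (2, 0):
observed = [(0, 0), (1, 0), (2, 0)]
goals = {"door": (4, 0), "window": (0, 4)}
print(goal_scores(observed, goals))   # "door" comes out far more likely than "window"
```

Richer inverse-planning accounts replace the straight-line cost with planner-derived costs and add a prior over goals, but the underlying rationality assumption is the one stated in the abstract.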


2015, Vol 30 (2), pp. 117-139
Author(s): Frank C. Keil, George E. Newman

Author(s): Emma Borg

There is a sense in which it is trivial to say that one accepts intention- (or convention-)based semantics. For if what is meant by this claim is simply that there is an important respect in which words and sentences have meaning (either at all or the particular meanings that they have in any given natural language) due to the fact that they are used, in the way they are, by intentional agents (i.e. speakers), then it seems no one should disagree. For imagine a possible world where there are physical things which share the shape and form of words of English or Japanese, or the acoustic properties of sentences of Finnish or Arapaho, yet where there are no intentional agents (or where any remaining intentional agents don't use language). In such a world, it seems clear that these physical objects, which are only superficially language-like, will lack all meaning.

