Agent Technology

Author(s):  
J.J. Ch. Meyer

Agent technology is a rapidly growing subdiscipline of computer science, on the borderline of artificial intelligence and software engineering, that studies the construction of intelligent systems. It is centered around the concept of an (intelligent/rational/autonomous) agent. An agent is a software entity that displays some degree of autonomy; it performs actions in its environment on behalf of its user, but in a relatively independent way, taking the initiative to perform actions on its own by deliberating over its options to achieve its goal(s).

The field of agent technology emerged out of philosophical considerations about how to reason about courses of action, and human action in particular. In analytical philosophy there is an area concerned with so-called practical reasoning, in which one studies practical syllogisms, which constitute patterns of inference regarding actions. By way of an example, a practical syllogism may have the following form (Audi, 1999, p. 728): Would that I exercise. Jogging is exercise. Therefore, I shall go jogging. Although this has the form of a deductive syllogism in the familiar Aristotelian tradition of “theoretical reasoning,” on closer inspection it appears that this syllogism does not express a purely logical deduction. (The conclusion does not follow logically from the premises.) It rather constitutes a representation of a decision of the agent (going to jog), where this decision is based on mental attitudes of the agent, namely, his/her beliefs (“jogging is exercise”) and his/her desires or goals (“would that I exercise”). So, practical reasoning is “reasoning directed toward action—the process of figuring out what to do,” as Wooldridge (2000, p. 21) puts it. The process of reasoning about what to do next on the basis of mental states such as beliefs and desires is called deliberation (see Figure 1).

The philosopher Michael Bratman has argued that humans (and, more generally, resource-bounded agents) also use the notion of an intention when deliberating their next action (Bratman, 1987). An intention is a desire that the agent is committed to and will try to fulfill until it believes it has achieved it or has some other rational reason to abandon it. Thus, we could say that agents, given their beliefs and desires, choose some desire as their intention and “go for it.” This philosophical theory has been formalized in several studies, in particular the work of Cohen and Levesque (1990); Rao and Georgeff (1991); and Van der Hoek, Van Linder, and Meyer (1998), and has led to the so-called Belief-Desire-Intention (BDI) model of intelligent or rational agents (Rao & Georgeff, 1991). Since the beginning of the 1990s researchers have turned to the problem of realizing artificial agents. We will return to this below.
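The deliberation cycle sketched above can be made concrete with a short example. The following Python fragment is only an illustration of the BDI idea, not the formal models of Cohen and Levesque or Rao and Georgeff; the Desire and BDIAgent classes and their methods are hypothetical names introduced here for the sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Desire:
    name: str
    precondition: frozenset  # beliefs that must hold before committing
    goal: frozenset          # beliefs that hold once the desire is fulfilled

class BDIAgent:
    """A minimal belief-desire-intention loop (illustration only)."""

    def __init__(self, beliefs, desires):
        self.beliefs = set(beliefs)   # what the agent currently holds true
        self.desires = list(desires)  # candidate goals
        self.intention = None         # the desire the agent is committed to

    def deliberate(self):
        # Commit to the first desire whose precondition is believed to hold.
        for desire in self.desires:
            if desire.precondition <= self.beliefs:
                self.intention = desire
                return

    def act(self):
        # Acting on the intention; here we simply assume the action succeeds,
        # so the goal beliefs are added to the belief base.
        self.beliefs |= self.intention.goal

    def step(self):
        # One sense-deliberate-act cycle.
        if self.intention is None:
            self.deliberate()
        if self.intention is not None:
            self.act()
            if self.intention.goal <= self.beliefs:
                self.intention = None  # achieved, so the commitment is dropped

# Usage, mirroring the jogging example from the text.
exercise = Desire("exercise",
                  precondition=frozenset({"jogging is exercise"}),
                  goal=frozenset({"has exercised"}))
agent = BDIAgent(beliefs={"jogging is exercise"}, desires=[exercise])
agent.step()
print(agent.beliefs)  # {'jogging is exercise', 'has exercised'}
```

The point of the sketch is only the control flow: beliefs and desires feed deliberation, deliberation fixes an intention, and the intention persists until it is achieved or rationally abandoned.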

Author(s):  
Jonathan Dancy

This book offers a theory of practical reasoning which is Aristotelian in spirit, since it maintains that one can reason to action in very much the same ways as those in which one can reason to belief. But the book gives its own, non-Aristotelian account of what those ways are; the practical syllogism hardly appears at all. Instead, there are accounts of reasons as considerations favouring a certain response, and of other ways in which considerations can be relevant to that response. Practical reasoning consists in the attempt to see how the different relevant considerations come together to favour responding in a certain way (understood here as the attempt to determine the practical shape of the situation), and in acting in that way, in the light of those considerations. The only difference between this and theoretical reasoning is that in the latter, the relevant response is a belief rather than an action. The ‘therefore’ that is involved on both sides is a ‘for these reasons’ sort of therefore. The book also shows how the account offered can make good sense of moral reasoning and of the special forms of practical reasoning that are instrumental.


2013
Vol 43 (3)
pp. 303-321
Author(s):  
Eric Wiland

If practical reasoning deserves its name, its form must be different from that of ordinary (theoretical) reasoning. A few have thought that the conclusion of practical reasoning is an action, rather than a mental state. I argue here that if the conclusion is an action, then so too is one of the premises. You might reason your way from doing one thing to doing another: from browsing journal abstracts to reading a particular journal article. I motivate this by sympathetically re-examining Hume's claim that a conclusion about what ought to be done follows only from an argument one of whose premises is likewise about what ought to be done.


Author(s):  
Christopher Evan Franklin

This chapter explains the differences between agency reductionism and nonreductionism, explains the varieties of libertarianism, and sets out the main contours of minimal event-causal libertarianism, highlighting just how minimal this theory is. Crucial to understanding how minimal event-causal libertarianism differs from other event-causal libertarian theories is understanding the location and role of indeterminism in human action, the kinds of mental states essential to causing free action, the nature of nondeterministic causation, and how the theory is constructed from compatibilist accounts. The chapter argues that libertarians must face up to both the problem of luck and the problem of enhanced control when determining the best theoretical location of indeterminism.


Author(s):  
Robert Audi

This book provides an overall theory of perception and an account of knowledge and justification concerning the physical, the abstract, and the normative. It has the rigor appropriate for professionals but explains its main points using concrete examples. It accounts for two important aspects of perception on which philosophers have said too little: its relevance to a priori knowledge—traditionally conceived as independent of perception—and its role in human action. Overall, the book provides a full-scale account of perception, presents a theory of the a priori, and explains how perception guides action. It also clarifies the relation between action and practical reasoning; the notion of rational action; and the relation between propositional and practical knowledge. Part One develops a theory of perception as experiential, representational, and causally connected with its objects: as a discriminative response to those objects, embodying phenomenally distinctive elements; and as yielding rich information that underlies human knowledge. Part Two presents a theory of self-evidence and the a priori. The theory is perceptualist in explicating the apprehension of a priori truths by articulating its parallels to perception. The theory unifies empirical and a priori knowledge by clarifying their reliable connections with their objects—connections many have thought impossible for a priori knowledge as about the abstract. Part Three explores how perception guides action; the relation between knowing how and knowing that; the nature of reasons for action; the role of inference in determining action; and the overall conditions for rational action.


Author(s):  
Jonathan Dancy

This chapter considers some general issues about the nature of the account that is emerging. It asks whether moral reasoning should have been treated as it was in Chapter 5. It also asks whether an explanation of practical reasons by appeal to value could be mirrored by a similar explanation of theoretical reasoning if one thinks of truth as a value. One might also think of the probability of a belief as a respect in which it is of value. The chapter ends by introducing the idea of a focalist account, and maintains that the account offered of practical reasoning is focalist.


Author(s):  
Salvador Rus Rufino
María Asunción Sánchez Manzano

This paper studies the significance of the problem of the practical syllogism in Aristotle's work, considering the following:

a) Modern theories on the subject. The works themselves are not cited; only the first note refers to the authors who have studied the question.

b) The interpretation of the practical syllogism that can be given from the perspective of the treatises on practical philosophy, rather than from the Organon or the Metaphysics, which would provide the theoretical foundation of the problem.

c) It is concluded that logic, or theoretical philosophy, is insufficient, or at least not the only instrument, for analysing the problem in depth.

There is very little reason to foist a special practical syllogism on Aristotle. Rather than drawing a contrast in the logic of reasoning, we try to draw out the parallels and the distinctions Aristotle points to between the processes of theoretical and practical reasoning.

These processes are similar in that they begin with certain sentences which the mind puts together in some way such that some conclusion follows. In one case the conclusion of the process is a sentence which the mind affirms; in the other, an action which the man commits unless he is prevented by some other consideration or by external necessity.

Our paper will also bring out features relevant to Aristotle's simile of deliberation being like analysis in geometry.


2021
Vol 12 (1) ◽  
Author(s):  
Michail Pantoulias
Vasiliki Vergouli
Panagiotis Thanassas

Truth has always been a controversial subject in Aristotelian scholarship. In most cases, including some well-known passages in the Categories, De Interpretatione and Metaphysics, Aristotle uses the predicate ‘true’ for assertions, although exceptions are many and impossible to ignore. One of the most complicated cases is the concept of practical truth in the sixth book of Nicomachean Ethics: its entanglement with action and desire raises doubts about the possibility of its inclusion in the propositional model of truth. Nevertheless, in one of the most extensive studies on the subject, C. Olfert has tried to show that this is not only possible but also necessary. In this paper, we explain why trying to fit practical truth into the propositional model comes with insurmountable problems. In order to overcome these problems, we focus on multiple aspects of practical syllogism and correlate them with Aristotle’s account of desire, happiness and the good. Identifying the role of such concepts in the specific steps of practical reasoning, we reach the conclusion that practical truth is best explained as the culmination of a well-executed practical syllogism taken as a whole, which ultimately explains why this type of syllogism demands a different approach and a different kind of truth than the theoretical one.


2008
pp. 1360-1367
Author(s):  
Cesar Analide
Paulo Novais
José Machado
José Neves

The work done by some authors in the fields of computer science, artificial intelligence, and multi-agent systems foresees an approximation of these disciplines and those of the social sciences, namely, in the areas of anthropology, sociology, and psychology. Much of this work has been done in terms of the humanization of the behavior of virtual entities by expressing human-like feelings and emotions. Some authors (e.g., Ortony, Clore & Collins, 1988; Picard, 1997) suggest lines of action considering ways to assign emotions to machines. Attitudes like cooperation, competition, socialization, and trust are explored in many different areas (Arthur, 1994; Challet & Zhang, 1998; Novais et al., 2004). Other authors (e.g., Bazzan et al., 2000; Castelfranchi, Rosis & Falcone, 1997) recognize the importance of modeling virtual entity mental states in an anthropopathic way. Indeed, an important motivation for the development of this project comes from the authors' work with artificial intelligence in the area of knowledge representation and reasoning, in terms of an extension to the language of logic programming, that is, Extended Logic Programming (Alferes, Pereira & Przymusinski, 1998; Neves, 1984). On the other hand, the use of null values to deal with imperfect knowledge (Gelfond, 1994; Traylor & Gelfond, 1993) and the enforcement of exceptions to characterize the behavior of intelligent systems (Analide, 2004) is another justification for the adoption of these formalisms in this knowledge arena. Knowledge representation, as a way to describe the real world based on mechanical, logical, or other means, will always be a function of the system's ability to describe the existing knowledge and its associated reasoning mechanisms. Indeed, in the conception of a knowledge representation system, different instances of knowledge must be taken into account.
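To make the idea of representing imperfect knowledge more tangible, here is a small, purely illustrative Python sketch. It is not the authors' Extended Logic Programming formalism; the KnowledgeBase class and the UNKNOWN marker are hypothetical names used only to show how a query can answer true, false, or a null value when nothing is known.

```python
# Illustration only: a tiny three-valued "knowledge base" that mimics the
# distinction between known-true, known-false and unknown (a null value).
# The names below are invented for this sketch, not the cited formalism.

UNKNOWN = None  # stands in for a null value: no information either way

class KnowledgeBase:
    def __init__(self):
        self.facts = {}  # proposition -> True (known true) / False (known false)

    def assert_fact(self, proposition, value):
        self.facts[proposition] = value

    def query(self, proposition):
        # Closed-world reasoning would answer False here; instead we answer
        # UNKNOWN whenever nothing has been asserted about the proposition.
        return self.facts.get(proposition, UNKNOWN)

kb = KnowledgeBase()
kb.assert_fact("agent_is_trusted", True)   # known to be true
kb.assert_fact("task_completed", False)    # explicitly known to be false

print(kb.query("agent_is_trusted"))  # True
print(kb.query("task_completed"))    # False
print(kb.query("battery_is_low"))    # None, i.e., unknown (null value)
```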


Author(s):  
Benjamin Morison

The paper presses an analogy between Aristotle’s conception of practical reasoning and theoretical reasoning. It argues that theoretical reasoning has two optimal cognitive states associated with it, episteme and (theoretical) nous, and that practical reasoning has two counterpart states, phronēsis and (practical) nous. Theoretical nous is an expertise which enables those who have it to understand principles as principles, i.e. among other things, to know how to use them to derive other truths in their domain. It is a cognitively demanding state, which only experts have. Aristotelian practical nous is structurally similar to theoretical nous in that it requires the agent not only to know certain everyday truths, but also to know how and when to use them in deliberative reasoning. It is also a cognitively demanding notion, and only moral experts will have it.


Robotics
2019
Vol 8 (2)
pp. 25
Author(s):  
Arturs Ardavs
Mara Pudane
Egons Lavendelis
Agris Nikitenko

This paper proposes ViaBots, a model for long-term adaptive distributed intelligent systems that combines organization theory with the multi-agent paradigm. The need for adaptivity in autonomous intelligent systems has become crucial due to the increase in the complexity and diversity of the tasks that autonomous robots are employed for. To deal with the design complexity of such systems, within the ViaBots model each part of the modeled system is designed as an autonomous agent and the entire model as a multi-agent system. Based on the viable system model, which is widely used to ensure viability (i.e., the long-term autonomy of organizations), the ViaBots model defines the roles a system must fulfill to be able to adapt both to changes in its environment (such as changes in the task) and to changes within the system itself (such as the availability of a particular robot). Along with static role assignments, ViaBots proposes a mechanism for role transition from one agent to another as one of the key elements of long-term adaptivity. The model has been validated in a simulated environment using the example of a conveyor system. The simulated model enabled the multi-robot system to adapt to the quantity and characteristics of the available robots, as well as to changes in the parts to be processed by the system.
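The role-assignment and role-transition idea can be pictured with a small sketch. The Python below is a hypothetical illustration written for this summary, not the ViaBots implementation; the Robot class, the ROLES table, and assign_roles are invented names, and the policy (give each role to the first available robot with the required capabilities) is a deliberately simple stand-in for the model's actual mechanism.

```python
# Hypothetical sketch of role assignment and role transition in a
# conveyor-like multi-robot team; names and policy are illustrative only.

class Robot:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)
        self.role = None

ROLES = {
    "feeder":    {"grip"},
    "assembler": {"grip", "weld"},
    "inspector": {"camera"},
}

def assign_roles(robots, roles=ROLES):
    """(Re)assign every role to some capable, still-unassigned robot."""
    assignment = {}
    for role, needed in roles.items():
        for robot in robots:
            if needed <= robot.capabilities and robot not in assignment.values():
                robot.role = role
                assignment[role] = robot
                break
    return assignment

# Initial assignment with three robots on the conveyor.
r1 = Robot("r1", {"grip"})
r2 = Robot("r2", {"grip", "weld"})
r3 = Robot("r3", {"camera"})
team = [r1, r2, r3]
print({role: robot.name for role, robot in assign_roles(team).items()})
# {'feeder': 'r1', 'assembler': 'r2', 'inspector': 'r3'}

# r2 becomes unavailable and a new robot joins: the assembler role
# transitions to whichever remaining robot can take it over.
team.remove(r2)
team.append(Robot("r4", {"grip", "weld", "camera"}))
print({role: robot.name for role, robot in assign_roles(team).items()})
# {'feeder': 'r1', 'assembler': 'r4', 'inspector': 'r3'}
```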

