Intelligent Agents and Autonomous Robots

2019 ◽  
pp. 1134-1143
Author(s):  
Deepshikha Bhargava

Over the decades, new technologies, algorithms, and methods have evolved and been proposed. We can witness a paradigm shift from typewriters to computers, mechanics to mechatronics, physics to aerodynamics, and chemistry to computational chemistry. Such advancements are the result of continuing research, which remains the driving force of researchers. In the same way, research in the field of artificial intelligence (Russell & Norvig, 2003) is a major thrust area. Research in AI has introduced concepts such as natural language processing, expert systems, software agents, learning, knowledge management, and robotics, to name a few. The objective of this chapter is to highlight the research path from software agents to robotics. The chapter begins with an introduction to software agents, progresses through discussions of intelligent agents, autonomous agents, autonomous robots, and intelligent robots in separate sections, and finally concludes with the fine line between intelligent agents and autonomous robots.


Author(s):  
Mahesh S. Raisinghani

One of the most discussed topics in the information systems literature today is software agent/intelligent agent technology. Software agents are high-level software abstractions with inherent capabilities for communication, decision making, control, and autonomy. They are programs that perform functions such as information gathering, information filtering, or mediation (running in the background) on behalf of a person or entity. They have several aliases such as agents, bots, chatterbots, databots, intellibots, and intelligent software agents/robots. They provide a powerful mechanism to address complex software engineering problems such as abstraction, encapsulation, modularity, reusability, concurrency, and distributed operations. Much research has been devoted to this topic, and more and more new software products billed as having intelligent agent functionality are being introduced on the market every day. The research that is being done, however, does not wholeheartedly endorse this trend. The current research into intelligent agent software technology can be divided into two main areas: technological and social. The latter area is particularly important since, in the excitement of new and emergent technology, people often forget to examine what impact the new technology will have on people’s lives. In fact, the social dimension of all technology is the driving force and most important consideration of technology itself. This chapter presents a socio-technical perspective on intelligent agents and proposes a framework based on the data lifecycle and knowledge discovery using intelligent agents. One of the key ideas of this chapter is best stated by Peter F. Drucker in Management Challenges for the 21st Century when he suggests that in this period of profound social and economic changes, managers should focus on the meaning of information, not the technology that collects it.


2011 ◽  
pp. 104-112 ◽  
Author(s):  
Mahesh S. Raisinghani ◽  
Christopher Klassen ◽  
Lawrence L. Schkade

Although there is no firm consensus on what constitutes an intelligent agent (or software agent), when a user delegates a new task, an intelligent agent should determine precisely what its goal is, evaluate how that goal can be reached effectively, and perform the necessary actions, learning from past experience and responding to unforeseen situations with adaptive, self-starting, and temporally continuous reasoning strategies. It needs to be not only cooperative and mobile, in order to perform its tasks by interacting with other agents, but also reactive and autonomous, so that it can sense the status quo and act independently to make progress towards its goals (Baek et al., 1999; Wang, 1999). Software agents are goal-directed and possess abilities such as autonomy, collaborative behavior, and inferential capability. Intelligent agents can take different forms, but an intelligent agent can initiate and make decisions without human intervention, infer appropriate high-level goals from user actions and requests, and take actions to achieve those goals (Huang, 1999; Nardi et al., 1998; Wang, 1999). The intelligent software agent is a computational entity that can adapt to its environment, making it capable of interacting with other agents and transporting itself across different systems in a network.
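The sense-reason-act cycle described above can be reduced to a minimal loop. The class and method names below are illustrative, not taken from the cited works, and "learning from past experience" is simplified to a running success rate per action:

```python
from dataclasses import dataclass, field

@dataclass
class IntelligentAgent:
    """Minimal sketch of a goal-directed agent that learns from outcomes."""
    goal: str = ""
    experience: dict = field(default_factory=dict)  # action -> remembered success rate

    def delegate(self, goal: str) -> None:
        # The user delegates a new task; the agent adopts it as its goal.
        self.goal = goal

    def choose_action(self, available: list) -> str:
        # Prefer the action that worked best in past situations.
        return max(available, key=lambda a: self.experience.get(a, 0.0))

    def learn(self, action: str, success: float) -> None:
        # Blend the new outcome into the remembered success rate.
        old = self.experience.get(action, 0.0)
        self.experience[action] = 0.5 * old + 0.5 * success

agent = IntelligentAgent()
agent.delegate("fetch stock quotes")
agent.learn("query_cache", 0.9)
agent.learn("query_network", 0.4)
print(agent.choose_action(["query_cache", "query_network"]))  # query_cache
```

A fuller agent would add the cooperative and mobile behaviors the abstract mentions; this sketch only captures autonomy (acting without intervention) and adaptation.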


Author(s):  
Robert Finkelstein

Primitive autonomous robotic vehicles were first developed during World War I and deployed in wars throughout the 20th century. More recently, autonomous vehicles with cognition have been enabled by new technologies, especially in sensors, processors, and software, along with advances in humanoid and other legged robots. The autonomous car, a transformative and disruptive technology, will lead over the coming decades to the development of ubiquitous autonomous robots with increasing levels of cognition for many different applications. These robots will fill nearly every economic sector and occupational niche. By the end of the century, the impact of autonomous intelligent robots on society will raise ethical and moral dilemmas and affect jobs across the globe. There is a need to provide alternative sources of income, or alternative employment, for displaced human workers. There is also a need to consider the consequences of the possible emergence of robot self-awareness, consciousness, and free will.


This study examines the impact of the presence of intelligent agents holding a convinced (fixed) opinion on the formation of the final collective opinion during voting in a community G of autonomous agents. Each intelligent agent holds an opinion, 0 or 1, on a given topic. An interaction takes place between these agents [1] as they enter into a collective debate [2]; we call this interaction 'voting'. When the number of agents holding opinion 0 is close to the number holding opinion 1, the state of the community is near an unstable equilibrium point, and during the voting process it will move toward one of the two stability points, 0 or 1. According to a previous study, the presence of intelligent agents with a convinced opinion can later lead to a chaotic change in the final result of the voting [3]; this chaotic change is called the "Butterfly Effect" [4,5]. This influence can occur in many fields: market movements and economics, social life, mathematical problems, technological topics, and the movement of driverless vehicles, among others.
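The dynamics described above can be illustrated with a toy simulation (not the study's actual model): a near-balanced community starts at the unstable midpoint and drifts toward one of the stable points under random pairwise influence, while a few "convinced" agents never change their opinion:

```python
import random

def simulate_voting(n=100, n_convinced=3, steps=2000, seed=0):
    """Toy opinion dynamics: agents adopt a random neighbor's opinion,
    except 'convinced' agents, who hold opinion 1 forever.
    Returns the final fraction of the community holding opinion 1."""
    rng = random.Random(seed)
    opinions = [0] * (n // 2) + [1] * (n - n // 2)  # near the unstable balance point
    convinced = set(range(n_convinced))
    for i in convinced:
        opinions[i] = 1
    for _ in range(steps):
        listener = rng.randrange(n)
        speaker = rng.randrange(n)
        if listener not in convinced:
            opinions[listener] = opinions[speaker]  # listener adopts speaker's opinion
    return sum(opinions) / n

print(simulate_voting())
```

Re-running with slightly different seeds or initial splits shows the sensitivity near the balance point that the abstract likens to the Butterfly Effect: tiny perturbations decide which stable point the community reaches.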


1998 ◽  
Vol 13 (2) ◽  
pp. 143-146 ◽  
Author(s):  
GEORGE A. BEKEY

Autonomous robots are the intelligent agents par excellence. We frequently define a robot as a machine that senses, thinks and acts, i.e., an agent. They are distinguished from software agents in that robots are embodied agents, situated in the real world. As such, they are subject both to the joys and sorrows of the world. They can be touched and seen and heard (sometimes even smelled!), they have physical dimensions, and they can exert force on other objects. These objects can be like a ball in the RoboCup or Mirosot robot soccer games, they can be parts to be assembled, airplanes to be washed, carpets to be vacuumed, terrain to be traversed or cameras to be aimed. On the other hand, since robots are agents in the world they are also subject to its physical laws, they have mass and inertia, their moving parts encounter friction and hence heat, no two parts are precisely alike, measurements are corrupted by noise, and alas, parts break. Of course, robots also contain computers, and hence they are also subject to the slings and arrows of computer misfortunes, both in hardware and software. Finally, the world into which we place these robots keeps changing, it is non-stationary and unstructured, so that we cannot predict its features accurately in advance.


Author(s):  
Jana Polgar

Agents are viewed as the next significant software abstraction, and it is expected they will become as ubiquitous as graphical user interfaces are today. Agents are specialized programs designed to provide services to their users. Multiagent systems have a key capability to reallocate tasks among their members, which may result in significant savings and improvements in many domains, such as resource allocation, scheduling, e-commerce, and so forth. In the near future, agents will roam the Internet, selling and buying information and services. These agents will evolve from their present-day form, simple carriers of transactions, into efficient decision makers. It is envisaged that the decision-making processes and interactions between agents will be very fast (Kephart, 1998). The importance of automated negotiation systems is increasing with the emergence of new technologies supporting faster reasoning engines and mobile code. A central part of agent systems is a sophisticated reasoning engine that enables the agents to reallocate their tasks, optimize outcomes, and negotiate with other agents. The negotiation strategy used by the reasoning engine also requires high-level inter-agent communication protocols and suitable collaboration strategies. Both of these subsystems, the reasoning engine and the negotiation strategy, typically result in complicated agent designs and implementations that are difficult to maintain. The activities of a set of autonomous agents have to be coordinated; some may be mobile agents, while others are static intelligent agents. We usually aim at decentralized coordination, which produces the desired outcomes with minimal communication. Many different types of contract protocols (cluster, swap, and multiagent contracts, for example) and negotiation strategies are used. The evaluation of outcomes is often based on marginal cost (Sandholm, 1993) or game-theory payoffs (Mas-Colell, 1995). 
Agents based on constraint technology use complex search algorithms to solve optimization problems arising from the agents’ interaction. In particular, coordination and negotiation strategies in the presence of incomplete knowledge are good candidates for constraint-based implementations.
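Marginal-cost evaluation of contracts can be sketched as follows. This toy contract-net-style allocator illustrates the idea behind marginal-cost bidding, not the actual protocol of Sandholm (1993): each task is announced and awarded to the agent whose bid, its marginal cost for taking on the task, is lowest.

```python
def marginal_cost(tasks, new_task, cost_fn):
    """Extra cost an agent incurs by adding new_task to its current load."""
    return cost_fn(tasks + [new_task]) - cost_fn(tasks)

def allocate(tasks, agents, cost_fn):
    """Award each task to the lowest-bidding agent, contract-net style."""
    assignment = {a: [] for a in agents}
    for task in tasks:
        bids = {a: marginal_cost(assignment[a], task, cost_fn) for a in agents}
        winner = min(bids, key=bids.get)  # lowest marginal cost wins the contract
        assignment[winner].append(task)
    return assignment

# Toy cost function: quadratic in load, so marginal cost rises with each task
# and the agents end up balancing the work among themselves.
cost = lambda ts: len(ts) ** 2
print(allocate(["t1", "t2", "t3", "t4"], ["a1", "a2"], cost))
```

With a quadratic cost, each added task costs more than the last, so low-bid awarding spreads tasks evenly; a real negotiation would add the swap and cluster contracts the abstract mentions to escape local optima.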


Author(s):  
Claus Hoffmann ◽  
Pascal Linden ◽  
Maria-Esther Vidal

This paper presents ARTEMIS, a control system for autonomous robots or software agents. ARTEMIS can create human-like artificial emotions during interactions with its environment, and we describe the underlying mechanisms for this. The control system also captures its past artificial emotions: a specific interpretation of a knowledge graph, called an Agent Knowledge Graph, stores them. ARTEMIS then utilizes current and stored emotions to adapt its decision-making and planning processes. As proof of concept, we realize a concrete software agent based on the ARTEMIS control system. This software agent acts as a user assistant and executes the user's orders and instructions. The environment of this user assistant consists of several other autonomous agents that offer their services. The execution of a user's orders requires interactions between the user assistant and these autonomous service agents, and these interactions lead to the creation of artificial emotions within the user assistant. First experiments show that it is possible to realize an autonomous user assistant with plausible artificial emotions using ARTEMIS and to record these artificial emotions in its Agent Knowledge Graph. The results also show that the captured emotions support successful planning and decision making in complex dynamic environments: the user assistant with emotions surpasses an emotionless version of itself.
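The idea of storing emotions in an agent's knowledge graph can be sketched as a toy triple store. The schema, predicate names, and interaction identifiers below are invented for illustration and are not those of ARTEMIS itself:

```python
class AgentKnowledgeGraph:
    """Toy triple store recording emotions an agent felt during interactions."""

    def __init__(self):
        self.triples = []  # list of (subject, predicate, object)

    def record_emotion(self, interaction, emotion, intensity):
        # Each emotion becomes a node linked to the interaction that caused it.
        node = f"emotion_{len(self.triples) // 3}"
        self.triples += [
            (node, "felt_during", interaction),
            (node, "has_type", emotion),
            (node, "has_intensity", intensity),
        ]
        return node

    def query(self, predicate, obj):
        # Return all subjects linked to obj by the given predicate.
        return [s for s, p, o in self.triples if p == predicate and o == obj]

kg = AgentKnowledgeGraph()
kg.record_emotion("order_42_with_service_agent_B", "frustration", 0.8)
kg.record_emotion("order_43_with_service_agent_C", "satisfaction", 0.6)
# Which stored emotions were triggered by a given past interaction?
print(kg.query("felt_during", "order_42_with_service_agent_B"))  # ['emotion_0']
```

A planner could then bias future service-agent selection away from partners associated with stored negative emotions, which is one plausible reading of how captured emotions "support successful planning" in the abstract.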


2020 ◽  
Vol 34 (4) ◽  
pp. 143-164
Author(s):  
Peter C. Kipp ◽  
Mary B. Curtis ◽  
Ziyin Li

Advances in IT suggest that computerized intelligent agents (IAs) may soon occupy many roles that presently employ human agents. A significant concern is the ethical conduct of those who use IAs, including their possible utilization by managers to engage in earnings management. We investigate how financial reporting decisions are affected when they are supported by the work of an IA versus a human agent, with varying autonomy. In an experiment with experienced managers, we vary agent type (human versus IA) and autonomy (more versus less), finding that managers make less aggressive financial reporting decisions with IAs than with human agents, and less aggressive reporting decisions with less autonomous agents than with more autonomous agents. Managers' perception of control over their agent and their ability to diffuse their own responsibility for financial reporting decisions explain the effects of agent type and autonomy on managers' financial reporting decisions.


2019 ◽  
Vol 10 (1) ◽  
pp. 24-45
Author(s):  
Samuel Allen Alexander

Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure the numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? We propose an elegant answer based on the following insight: we can view Legg-Hutter agents as candidates in an election whose voters are environments, letting each environment vote (via its rewards) on which agent, if either, is more intelligent. This leads to an abstract family of comparators simple enough that we can prove some structural theorems about them. It is an open question whether these structural theorems apply to more practical intelligence measures.
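The election idea can be sketched directly. The toy agents and environments below are invented for illustration and do not reproduce the paper's formal Legg-Hutter setting, where environments are reward-giving interactive processes rather than one-shot functions:

```python
def compare_agents(agent1, agent2, environments):
    """Each environment votes (via rewards) for the agent that earns more in it;
    an environment where both agents earn the same reward abstains."""
    votes = {1: 0, 2: 0}
    for env in environments:
        r1, r2 = env(agent1), env(agent2)
        if r1 > r2:
            votes[1] += 1
        elif r2 > r1:
            votes[2] += 1
    if votes[1] > votes[2]:
        return "agent1"
    if votes[2] > votes[1]:
        return "agent2"
    return "tie"

# Toy agents: policies mapping an observation to an action. Toy environments:
# each rewards an agent 1.0 if its action matches the environment's target.
agent_a = lambda obs: obs       # echoes the observation
agent_b = lambda obs: 1 - obs   # inverts the observation
envs = [lambda ag, t=t: 1.0 if ag(t) == t else 0.0 for t in (0, 1, 0)]
print(compare_agents(agent_a, agent_b, envs))  # agent1
```

Note that the comparator only uses the sign of each environment's reward difference, not its magnitude, which is what makes it an election rather than an aggregate numeric measure.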

