Creating and Capturing Artificial Emotions in Autonomous Robots and Software Agents

Author(s):  
Claus Hoffmann ◽  
Pascal Linden ◽  
Maria-Esther Vidal

This paper presents ARTEMIS, a control system for autonomous robots and software agents that creates human-like artificial emotions during interactions with its environment. We describe the underlying mechanisms and show how the control system also captures its past artificial emotions, storing them in a specific interpretation of a knowledge graph called an Agent Knowledge Graph. ARTEMIS then utilizes both current and stored emotions to adapt its decision-making and planning processes. As a proof of concept, we realize a concrete software agent based on the ARTEMIS control system. This agent acts as a user assistant and executes the user's orders and instructions. Its environment consists of several other autonomous agents that offer their services, and executing a user's orders requires the assistant to interact with these autonomous service agents. These interactions lead to the creation of artificial emotions within the user assistant. First experiments show that ARTEMIS makes it possible to realize an autonomous user assistant with plausible artificial emotions and to record these emotions in its Agent Knowledge Graph. The results also show that captured emotions support successful planning and decision making in complex dynamic environments: the user assistant with emotions surpasses an emotionless version of itself.
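The abstract describes the Agent Knowledge Graph only at a high level; the following is a minimal sketch of how such a graph might store emotion episodes as triples and feed them back into planning. It assumes an RDF-style triple representation, and every identifier (EmotionEpisode, triggeredBy, and so on) is hypothetical rather than taken from ARTEMIS.

```python
from dataclasses import dataclass, field

# Illustrative Agent Knowledge Graph: emotions are stored as
# subject-predicate-object triples so past episodes can be queried
# during later planning. All names are hypothetical, not from ARTEMIS.

@dataclass
class AgentKnowledgeGraph:
    triples: set = field(default_factory=set)

    def add(self, subject: str, predicate: str, obj: str) -> None:
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return all triples matching the pattern (None = wildcard)."""
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

kg = AgentKnowledgeGraph()
# Record an emotion episode created while interacting with a service agent.
kg.add("episode42", "type", "EmotionEpisode")
kg.add("episode42", "emotion", "frustration")
kg.add("episode42", "triggeredBy", "serviceAgentB")
kg.add("episode42", "duringTask", "orderGroceries")

# Later, the planner can bias service-agent selection on stored emotions.
past = kg.query(predicate="triggeredBy", obj="serviceAgentB")
print(f"{len(past)} stored episode(s) involve serviceAgentB")
```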


2019 ◽  
pp. 1134-1143
Author(s):  
Deepshikha Bhargava

Over the decades, new technologies, algorithms, and methods have evolved and been proposed. We can witness a paradigm shift from typewriters to computers, mechanics to mechatronics, physics to aerodynamics, chemistry to computational chemistry, and so on. Such advancements are the result of continuing research, which remains the driving force of researchers. In the same way, research in the field of artificial intelligence (Russell, Stuart & Norvig, 2003) is a major thrust area. Research in AI has produced concepts such as natural language processing, expert systems, software agents, learning, knowledge management, and robotics, to name a few. The objective of this chapter is to highlight the research path from software agents to robotics. The chapter begins with an introduction to software agents, progresses through discussions of intelligent agents, autonomous agents, autonomous robots, and intelligent robots in separate sections, and finally concludes with the fine line between intelligent agents and autonomous robots.


1998 ◽  
Vol 13 (2) ◽  
pp. 143-146 ◽  
Author(s):  
GEORGE A. BEKEY

Autonomous robots are the intelligent agents par excellence. We frequently define a robot as a machine that senses, thinks, and acts, i.e., an agent. They are distinguished from software agents in that robots are embodied agents, situated in the real world. As such, they are subject to both the joys and the sorrows of the world. They can be touched and seen and heard (sometimes even smelled!), they have physical dimensions, and they can exert force on other objects. These objects can be a ball in the RoboCup or Mirosot robot soccer games, parts to be assembled, airplanes to be washed, carpets to be vacuumed, terrain to be traversed, or cameras to be aimed. On the other hand, since robots are agents in the world, they are also subject to its physical laws: they have mass and inertia, their moving parts encounter friction and hence heat, no two parts are precisely alike, measurements are corrupted by noise, and, alas, parts break. Of course, robots also contain computers, and hence they are also subject to the slings and arrows of computer misfortunes, in both hardware and software. Finally, the world into which we place these robots keeps changing; it is non-stationary and unstructured, so we cannot predict its features accurately in advance.


Author(s):  
Ram Gopal Gupta ◽  
Bireshwar Dass Mazumdar ◽  
Kuldeep Yadav

The rapidly changing needs and opportunities of today's global software market require unprecedented levels of code comprehension to integrate diverse information systems, share knowledge, and collaborate among organizations. Combining code comprehension with software agents not only provides a promising computing paradigm for efficient agent-mediated code comprehension services for the selection and integration of inter-organizational business processes; it also raises certain cognitive issues that need to be addressed. We review some of the key cognitive models and theories that have emerged in software code comprehension. This paper then proposes a cognitive model that brings forth cognitive challenges which, if handled properly by an organization, would help in leveraging software design and dependencies.


Author(s):  
PAUL A. BOXER

Autonomous robots are unsuccessful at operating in complex, unconstrained environments. They lack the ability to learn about the physical behavior of different objects through the use of vision. We combine Bayesian networks and qualitative spatial representation to learn general physical behavior by visual observation. We input training scenarios that allow the system to observe and learn normal physical behavior. The position and velocity of the visible objects are represented as qualitative states. Transitions between these states over time are entered as evidence into a Bayesian network. The network provides probabilities of future transitions to produce predictions of future physical behavior. We use test scenarios to determine how well the approach discriminates between normal and abnormal physical behavior and actively predicts future behavior. We examine the ability of the system to learn three naive physical concepts: "no action at a distance", "solidity", and "movement on continuous paths". We conclude that the combination of qualitative spatial representations and Bayesian network techniques is capable of learning these three rules of naive physics.
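To make the learning step concrete, here is a simplified sketch: it approximates the paper's Bayesian-network treatment with a first-order transition table learned by counting observed transitions between qualitative states, then flags low-probability transitions as abnormal. The qualitative state names and the 0.05 threshold are illustrative assumptions, not from the paper.

```python
from collections import Counter, defaultdict

# Sketch: qualitative object states (discretized position/velocity)
# observed over time, with transition probabilities learned from
# training scenarios. A full Bayesian network is approximated here by
# a first-order transition table; all state names are illustrative.

def learn_transitions(scenarios):
    """Count state-to-state transitions across training scenarios."""
    counts = defaultdict(Counter)
    for states in scenarios:
        for prev, nxt in zip(states, states[1:]):
            counts[prev][nxt] += 1
    return counts

def transition_prob(counts, prev, nxt):
    """P(next state | previous state), estimated from the counts."""
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

# Training: a ball approaches a wall and bounces back ("solidity").
training = [
    ["moving_right", "touching_wall", "moving_left"],
    ["moving_right", "touching_wall", "moving_left"],
]
model = learn_transitions(training)

# Test: a ball passing through the wall would violate "solidity".
p = transition_prob(model, "touching_wall", "beyond_wall")
print(f"P(beyond_wall | touching_wall) = {p:.2f} ->",
      "abnormal" if p < 0.05 else "normal")
```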


Author(s):  
Stamatis Karnouskos

The rapid advances in Artificial Intelligence and Robotics will have a profound impact on society, as they will interfere with people and their interactions. Intelligent autonomous robots, whether humanoid/anthropomorphic or not, will have a physical presence, make autonomous decisions, and interact with all stakeholders in society in yet unforeseen manners. The symbiosis with such sophisticated robots may lead to a fundamental civilizational shift with far-reaching effects, as philosophical, legal, and societal questions on consciousness, citizenship, rights, and the legal entity of robots are raised. The aim of this work is to understand the broad scope of potential issues pertaining to law and society through an investigation of the interplay of law, robots, and society from different angles, such as legal, social, economic, gender, and ethical perspectives. The results make it evident that, in an era of symbiosis with intelligent autonomous robots, neither legal systems nor society is prepared for their prevalence. Therefore, it is now time to start a multi-disciplinary stakeholder discussion and derive the necessary policies, frameworks, and roadmaps for the most imminent issues.


2021 ◽  
Vol 10 (3) ◽  
pp. 1-31
Author(s):  
Zhao Han ◽  
Daniel Giger ◽  
Jordan Allspaw ◽  
Michael S. Lee ◽  
Henny Admoni ◽  
...  

As autonomous robots continue to be deployed near people, robots need to be able to explain their actions. In this article, we focus on organizing and representing complex tasks in a way that makes them readily explainable. Many actions consist of sub-actions, each of which may have several sub-actions of their own, and the robot must be able to represent these complex actions before it can explain them. To generate explanations for robot behavior, we propose using Behavior Trees (BTs), which are a powerful and rich tool for robot task specification and execution. However, for BTs to be used for robot explanations, their free-form, static structure must be adapted. In this work, we add structure to previously free-form BTs by framing them as a set of semantic sets {goal, subgoals, steps, actions} and subsequently build explanation generation algorithms that answer questions seeking causal information about robot behavior. We make BTs less static with an algorithm that inserts a subgoal that satisfies all dependencies. We evaluate our BTs for robot explanation generation in two domains: a kitting task to assemble a gearbox, and a taxi simulation. Code for the behavior trees (in XML) and all the algorithms is available at github.com/uml-robotics/robot-explanation-BTs.
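The core idea of annotating behavior tree nodes with semantic roles can be sketched briefly: tag each node as a goal, subgoal, step, or action, then answer causal "why" questions by walking up the tree to the enclosing subgoal or goal. The node names and explanation template below are assumptions for illustration, not the authors' code (see github.com/uml-robotics/robot-explanation-BTs for the actual implementation).

```python
# Sketch: behavior tree nodes annotated with semantic roles
# (goal / subgoal / step / action) so causal "why" questions can be
# answered by walking up the tree. Names are illustrative only.

class Node:
    def __init__(self, name, role, children=()):
        self.name, self.role = name, role
        self.parent = None
        self.children = list(children)
        for c in self.children:
            c.parent = self

def explain_why(node):
    """Answer 'Why did you do X?' by citing the enclosing subgoal/goal."""
    anc = node.parent
    while anc and anc.role not in ("subgoal", "goal"):
        anc = anc.parent
    return (f"I performed '{node.name}' in order to achieve "
            f"the {anc.role} '{anc.name}'." if anc else
            f"'{node.name}' is the root goal.")

# Kitting-style example: assemble a gearbox.
pick = Node("pick up small gear", "action")
place = Node("place small gear in kit", "action")
sub = Node("add small gear", "subgoal", [pick, place])
goal = Node("assemble gearbox kit", "goal", [sub])

print(explain_why(pick))
# -> I performed 'pick up small gear' in order to achieve
#    the subgoal 'add small gear'.
```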

