Predictability of AI Decisions

Author(s):  
Grzegorz Musiolik

Artificial intelligence evolves rapidly and will have a great impact on society in the future. One important question that still cannot be answered satisfactorily is whether the decisions of an intelligent agent can be predicted. From this follows the more general question of whether such agents can be controlled and future robotic applications can be safe. This chapter shows that unpredictable systems are very common in mathematics and physics, even though the underlying mathematical structure can be very simple. It also shows that such unpredictability can emerge for intelligent agents in reinforcement learning, especially for complex tasks with many input parameters. An observer would not be able to distinguish this unpredictability from free will on the agent's part. This raises ethical questions and safety issues, which are briefly presented.
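The claim that very simple mathematical structure can produce unpredictability is easy to demonstrate. The chapter does not name a specific system; the logistic map is a standard illustrative choice, sketched here: two trajectories whose starting points differ by one part in ten billion become completely uncorrelated after a few dozen iterations.

```python
# Sensitivity to initial conditions in the logistic map x_{n+1} = r*x*(1-x).
# Illustrative example (not taken from the chapter): at r = 4 the map is
# chaotic, so a tiny perturbation of the start grows exponentially.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # perturb the start by 1e-10

# The gap between the two trajectories grows roughly exponentially until it
# saturates at the size of the attractor, destroying long-term prediction.
max_gap = max(abs(x - y) for x, y in zip(a, b))
```

Despite the one-line update rule, no finite-precision observer can predict the state far ahead: the knowledge of the initial condition is exhausted at a fixed rate per step.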

2012 ◽  
pp. 1225-1233
Author(s):  
Christos N. Moridis ◽  
Anastasios A. Economides

During recent decades there has been extensive progress on several Artificial Intelligence (AI) concepts, such as that of the intelligent agent. Meanwhile, it has been established that emotions play a crucial role in human reasoning and learning. Thus, developing an intelligent agent able to recognize and express emotions has been considered an enormous challenge for AI researchers. Embedding a computational model of emotions in intelligent agents can be beneficial in a variety of domains, including e-learning applications. However, until recently the emotional aspects of human learning were not taken into account when designing e-learning platforms. Various issues arise when developing affective agents for e-learning environments, such as the agents' appearance, as well as ways for those agents to recognize learners' emotions and express emotional support. Embodied conversational agents (ECAs) with empathetic behaviour have been suggested as one effective way for such agents to give emotional feedback to learners. There has been some valuable research in this direction, but much work remains to be done to advance scientific knowledge.


2018 ◽  
Author(s):  
Juarez Monteiro ◽  
Roger Granada ◽  
Rafael C. Pinto ◽  
Rodrigo C. Barros

Artificial Intelligence (AI) seeks to bring intelligent behavior to machines by using specific techniques. These techniques can be employed to solve tasks such as planning paths or controlling intelligent agents. Some tasks that use AI techniques are not trivially testable, since they can involve a large number of variables depending on their complexity. As digital games provide a wide range of variables, they are an efficient and economical means of testing artificial intelligence techniques. In this paper, we propose a combination of a behavior tree and a pathfinding algorithm to solve a maze-based problem in the digital game Bomberman on the Nintendo Entertainment System (NES) platform. We analyze the AI techniques in order to verify the feasibility of future experiments in similarly complex environments. Our experiments show that our intelligent agent can be successfully implemented using the proposed approach.
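The two pieces the paper combines can be sketched in a few lines. This is not the authors' implementation: the behaviors, the grid maze, and the BFS pathfinder below are illustrative stand-ins for the Bomberman-specific behavior tree and pathfinding algorithm of the paper.

```python
# Minimal sketch of a behavior tree driving a grid pathfinder.
# A Selector node ticks its children in priority order and returns the
# first non-None result; BFS finds a shortest route through the maze.

from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a 4-connected grid; '#' cells are walls."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable

class Selector:
    """Behavior-tree selector: try children until one produces an action."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            result = child(state)
            if result is not None:
                return result
        return None

# Two hypothetical behaviors: evading danger takes priority over navigation.
def flee_if_danger(state):
    return "flee" if state.get("danger") else None

def follow_path(state):
    path = bfs_path(state["grid"], state["pos"], state["goal"])
    return ("move", path[1]) if path and len(path) > 1 else None

agent = Selector(flee_if_danger, follow_path)
```

Ticking `agent` each frame yields either an evasive action or the next step along a shortest path, which is the general shape of the combination the paper evaluates.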


Author(s):  
Vedang Naik ◽  
Rohit Sahoo ◽  
Sameer Mahajan ◽  
Saurabh Singh ◽  
...  

Reinforcement learning is an artificial intelligence paradigm in which intelligent agents accrue rewards from their environment in order to achieve superior results. It is concerned with sequential decision-making problems that offer only limited feedback. Reinforcement learning has roots in cybernetics and in research in statistics, psychology, neuroscience, and computer science. It has piqued the interest of the machine learning and artificial intelligence communities in the last five to ten years. Its promise is that agents can be trained using rewards and penalties alone, without specifying how the task is to be completed. The RL problem may be described as an agent that must make decisions in a given environment so as to maximize a specified notion of cumulative reward. The learner is not told which actions to perform but must experiment to discover which actions yield the greatest reward. Thus, the learner has to actively choose between exploring its environment and exploiting its current knowledge. This exploration-exploitation dilemma is one of the most common issues encountered when dealing with reinforcement learning algorithms. Deep reinforcement learning is the combination of reinforcement learning (RL) and deep learning. In this study we describe how to apply several deep reinforcement learning (RL) algorithms to a Cartpole system, which represents episodic environments, and to stock market trading, which represents continuous environments. We explain and demonstrate the effects of different RL ideas such as Deep Q Networks (DQN), Double DQN, and Dueling DQN on learning performance. We also examine the fundamental distinctions between episodic and continuous tasks and how the exploration-exploitation issue is addressed in each context.
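The reward-driven learning loop and the epsilon-greedy answer to the exploration-exploitation dilemma described above can be shown without a neural network. The sketch below is not the paper's DQN setup: it is the tabular Q-learning core that DQN generalizes, on an illustrative five-state corridor where only the rightmost state pays a reward; all constants are assumptions for the example.

```python
# Tabular Q-learning with epsilon-greedy exploration on a toy corridor.
# States 0..4; the agent steps left or right; reaching state 4 ends the
# episode with reward 1.0. The agent is never told the solution - it must
# discover the all-right policy from rewards alone.

import random

N_STATES = 5
ACTIONS = (-1, +1)    # step left or step right

def step(state, action):
    """Environment transition: clip to the corridor, reward only at the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Exploration-exploitation trade-off: usually exploit the best
            # known action, but with probability epsilon try a random one.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            target = reward if done else reward + gamma * max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = nxt
    return q

q = train()
# The learned greedy policy moves right in every non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

DQN replaces the table `q` with a neural network and adds a replay buffer and target network; Double DQN and Dueling DQN then modify how the `target` term is computed and how the network is structured, but the update rule above is the shared foundation.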


2019 ◽  
Author(s):  
Mehran Moradi Spitmaan ◽  
Amanda Caterina Leong

Considering the concept of intelligent agents in systems theory, we aim to expand the definitions of intelligence and the intelligent act by creating a new theoretical framework and investigating its applications, in order to open new perspectives in both artificial intelligence and literary studies. Assuming a temporally unidirectional environment, our theory describes an intelligent agent as one who aims to perform intelligent acts. An intelligent act is an ongoing attempt by an agent to predict the future of the self as well as the future of part or all of the environment; the outcome, however, is always uncertain. Our theory categorizes human behavior into intelligent and non-intelligent actions. Applying this categorization to literary texts enables us to continuously portray characters as intelligent or non-intelligent entities based on their actions. This approach helps redefine the way we perceive agency in characters' progression. By investigating depictions of uncertainty in characters, we treat characters as entities able to rupture the conventions governing character coherence and narrative closure in literary texts. We therefore hope that readers will be able to accept stories that are at odds not only with the characters themselves but also with the readers.


2019 ◽  
Vol 3 (2) ◽  
pp. 34
Author(s):  
Hiroshi Yamakawa

In a human society with emergent technology, the destructive actions of some pose a danger to the survival of all of humankind, increasing the need to maintain peace by overcoming universal conflicts. However, human society has not yet achieved complete global peacekeeping. Fortunately, a new possibility for peacekeeping among human societies, using the appropriate interventions of an advanced system, will become available in the near future. To achieve this goal, an artificial intelligence (AI) system must operate continuously and stably (condition 1) and must have an intervention method for maintaining peace among human societies based on a common value (condition 2). As a premise, however, there must be a minimal common value upon which all of human society can agree (condition 3). In this study, an AI system to achieve condition 1 was investigated. This system was designed as a group of distributed intelligent agents (IAs) to ensure robust and rapid operation. Even if common goals are shared among all IAs, each autonomous IA acts on its own local values in order to adapt quickly to the environment it faces. Thus, conflicts between IAs are inevitable, and this situation sometimes interferes with the achievement of commonly shared goals. Even so, the IAs can maintain peace within their own society if each of the dispersed IAs believes that all the others aim for socially acceptable goals. However, communication-channel problems, comprehension problems, and computational-complexity problems are barriers to realization. In the case of computer-based IAs, these barriers can be overcome by introducing an appropriate goal-management system. An IA society could then achieve its goals peacefully, efficiently, and consistently, so condition 1 should be achievable. Humans, in contrast, are restricted by their biological nature and tend to interact with others similar to themselves, making the eradication of conflicts more difficult.


2014 ◽  
Vol 571-572 ◽  
pp. 105-108
Author(s):  
Lin Xu

This paper proposes a new framework that combines reinforcement learning with a cloud computing digital library. Unified self-learning algorithms, which include reinforcement learning, artificial intelligence, and others, have led to many essential advances. Given the current status of highly available models, analysts urgently desire the deployment of write-ahead logging. In this paper we examine how DNS can be applied to the investigation of superblocks, and we introduce reinforcement learning to improve the quality of the current cloud computing digital library. The experimental results show that the method works more efficiently.
