autonomous agent
Recently Published Documents


TOTAL DOCUMENTS: 341 (five years: 69)

H-INDEX: 17 (five years: 3)

2021 ◽  
Vol 18 (4(Suppl.)) ◽  
pp. 1350
Author(s):  
Tho Nguyen Duc ◽  
Chanh Minh Tran ◽  
Phan Xuan Tan ◽  
Eiji Kamioka

Imitation learning is an effective method for training an autonomous agent to accomplish a task by imitating the expert behaviors contained in demonstrations. However, traditional imitation learning methods require a large number of expert demonstrations to learn a complex behavior, which limits their potential in complex tasks where sufficient expert demonstrations are unavailable. To address this problem, we propose a Generative Adversarial Network-based model designed to learn optimal policies from only a single demonstration. The proposed model is evaluated on two simulated tasks in comparison with other methods. The results show that our model is capable of completing the considered tasks despite the limited number of expert demonstrations, which clearly indicates its potential.
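The single-demonstration setting can be made concrete with a small sketch in the spirit of adversarial imitation learning. The paper's actual architecture is not specified here; the toy task, the logistic discriminator, and all names below are illustrative assumptions. A discriminator is trained to separate expert (state, action) pairs taken from one demonstration from the agent's pairs, and its log-output then serves as a surrogate reward for policy updates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Single expert demonstration: states s in [0, 1], expert policy a = 2s.
s = np.linspace(0.0, 1.0, 50)
expert = np.stack([s, 2.0 * s], axis=1)                  # expert (state, action) pairs
agent = np.stack([s, rng.normal(0.0, 0.3, 50)], axis=1)  # untrained agent's pairs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic discriminator D(s, a) = sigmoid(theta . [s, a, 1]),
# trained to output 1 on expert pairs and 0 on agent pairs.
X = np.hstack([np.vstack([expert, agent]), np.ones((100, 1))])
y = np.concatenate([np.ones(50), np.zeros(50)])
theta = np.zeros(3)
for _ in range(2000):                  # gradient ascent on the log-likelihood
    p = sigmoid(X @ theta)
    theta += 0.1 * X.T @ (y - p) / len(y)

def reward(pairs):
    """Surrogate reward log D(s, a): high where behavior looks expert-like."""
    Z = np.hstack([pairs, np.ones((pairs.shape[0], 1))])
    return np.log(sigmoid(Z @ theta) + 1e-8)

print(reward(expert).mean() > reward(agent).mean())      # True
```

In the full adversarial scheme this discriminator update would alternate with policy-gradient updates that maximize the surrogate reward, so the agent gradually produces pairs the discriminator can no longer tell apart from the single demonstration.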


2021 ◽  
Vol 10 ◽  
pp. 41-57
Author(s):  
Valentyna Yunchyk ◽  
◽  
Natalia Kunanets ◽  
Volodymyr Pasichnyk ◽  
Anatolii Fedoniuk ◽  
...  

The key terms and basic concepts of agents are analyzed. A structured general classification of agents is given according to how they represent a model of the external environment, the type of information processing they perform, and the functions they carry out. The classification of artificial agents (intellectual, reflex, impulsive, trophic) is also analyzed. The necessary conditions for an agent to implement a particular behavior are given, together with a scheme of the functioning of an intelligent agent. The levels of knowledge that play a key role in the agent's architecture are indicated. A functional diagram is presented of a learning agent that works relatively independently and demonstrates flexible behavior, and it is discussed how the functional scheme of a reactive agent is determined by its dependence on the environment. The properties of the intelligent agent are described in detail and its block diagram is given. Various agent architectures, in particular neural network agent architectures, are considered. An organization of level interaction in a multilevel agent architecture is proposed. Considerable attention is paid to the Will-architecture and InteRRaP-architecture of agents. A multilevel architecture for an autonomous agent based on a Turing machine is considered.
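Two of the agent types contrasted above, the reflex (reactive) agent and the learning agent, can be sketched minimally. The class names, rule table, and reward signal are illustrative only, not taken from the paper:

```python
# A reflex agent maps percepts straight to actions via a fixed rule table;
# a learning agent additionally updates that table from experience.
class ReflexAgent:
    def __init__(self, rules):
        self.rules = rules                # percept -> action table

    def act(self, percept):
        return self.rules.get(percept, 'wait')

class LearningAgent(ReflexAgent):
    def learn(self, percept, action, reward):
        if reward > 0:                    # keep condition-action pairs that worked
            self.rules[percept] = action

agent = LearningAgent({})
print(agent.act('obstacle'))              # 'wait': no rule yet
agent.learn('obstacle', 'turn_left', reward=1.0)
print(agent.act('obstacle'))              # 'turn_left': learned rule now fires
```

The inheritance mirrors the point made in the text: the learning agent contains a reactive core but is not limited to it, since its condition-action mapping changes with experience.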


Author(s):  
Menglong Yang ◽  
Katashi Nagao

The aim of this paper is to digitize the environments in which humans live at low cost and to reconstruct highly accurate three-dimensional environments based on real-world ones. Such three-dimensional content can be used, for example, in virtual reality environments and as three-dimensional maps for automated driving systems. In general, however, a three-dimensional environment must be carefully reconstructed by manually moving the sensors that scan the real environment on which it is based. This is done so that every corner of an entire area can be measured, but time and cost increase as the area expands. Therefore, a system is proposed that creates three-dimensional content based on real-world large-scale buildings at low cost. It automatically scans indoor spaces with a mobile robot that uses low-cost sensors and generates 3D point clouds. When the robot reaches an appropriate measurement position, it collects three-dimensional data on the shapes observable from that position using a 3D sensor and a 360-degree panoramic camera. Determining an appropriate measurement position is known as the "next best view problem," and it is difficult to solve in a complicated indoor environment. To deal with this problem, a deep reinforcement learning method is employed, combining reinforcement learning, with which an autonomous agent learns strategies for selecting behavior, and deep learning using neural networks. As a result, 3D point cloud data can be generated with better quality than with the conventional rule-based approach.
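The next-best-view selection can be illustrated with a deliberately tiny stand-in for the paper's method: tabular Q-learning (instead of a deep network) on an invented layout of four candidate scan positions, where the reward for visiting a position is the number of surface regions newly observed from it:

```python
import numpy as np
from collections import defaultdict

# Invented toy instance: each candidate viewpoint observes a fixed set of
# surface regions, and the robot may stop at BUDGET positions in total.
VIEWS = {0: {0, 1, 2}, 1: {2, 3}, 2: {3, 4, 5}, 3: {1, 2, 3}}
BUDGET = 2

rng = np.random.default_rng(0)
Q = defaultdict(float)                      # Q[(state, action)] table

def actions(state):
    return [a for a in VIEWS if a not in state]

def step(state, action):
    seen = set().union(*(VIEWS[v] for v in state)) if state else set()
    reward = len(VIEWS[action] - seen)      # newly observed regions
    return tuple(sorted(state + (action,))), reward

for _ in range(3000):                       # tabular Q-learning episodes
    state = ()
    while len(state) < BUDGET:
        acts = actions(state)
        if rng.random() < 0.2:              # epsilon-greedy exploration
            a = acts[rng.integers(len(acts))]
        else:
            a = max(acts, key=lambda x: Q[(state, x)])
        nxt, r = step(state, a)
        future = 0.0 if len(nxt) == BUDGET else max(Q[(nxt, b)] for b in actions(nxt))
        Q[(state, a)] += 0.5 * (r + future - Q[(state, a)])
        state = nxt

# Greedy rollout with the learned values.
state = ()
while len(state) < BUDGET:
    state, _ = step(state, max(actions(state), key=lambda x: Q[(state, x)]))
covered = set().union(*(VIEWS[v] for v in state))
print(len(covered))                         # 6: the best pair sees every region
```

The paper's setting replaces the lookup table with a neural network precisely because a real building's state space (robot pose plus partial point cloud) is far too large to enumerate, but the reward structure, maximizing newly observed geometry per stop, is the same idea.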


2021 ◽  
Author(s):  
◽  
Heidi Newton

<p>The thesis addresses the problem of creating an autonomous agent that is able to learn about and use meaningful hand motor actions in a simulated world with realistic physics, in a similar way to human infants learning to control their hand. A recent thesis by Mugan presented one approach to this problem using qualitative representations, but suffered from several important limitations. This thesis presents an alternative design that breaks the learning problem down into several distinct learning tasks. It presents a new method for learning rules about actions based on the Apriori algorithm. It also presents a planner inspired by infants that can use these rules to solve a range of tasks. Experiments showed that the agent was able to learn meaningful rules and was then able to successfully use them to achieve a range of simple planning tasks.</p>
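The rule-learning step can be sketched with a compact Apriori-style miner. The demonstration items, thresholds, and helper names below are illustrative assumptions, not the thesis's actual encoding:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Frequent-itemset mining in the style of Apriori (simplified join)."""
    n = len(transactions)
    support = lambda c: sum(c <= t for t in transactions) / n
    level = {frozenset([i]) for t in transactions for i in t}
    level = {c for c in level if support(c) >= min_support}
    frequent, k = {}, 1
    while level:
        frequent.update({c: support(c) for c in level})
        k += 1                     # grow candidates one item at a time
        level = {a | b for a in level for b in level if len(a | b) == k}
        level = {c for c in level if support(c) >= min_support}
    return frequent

def rules(frequent, min_conf):
    """Rules X -> Y read off the frequent itemsets, filtered by confidence."""
    found = []
    for itemset, sup in frequent.items():
        for r in range(1, len(itemset)):
            for lhs in map(frozenset, combinations(itemset, r)):
                if lhs in frequent and sup / frequent[lhs] >= min_conf:
                    found.append((set(lhs), set(itemset - lhs), sup / frequent[lhs]))
    return found

# Invented hand-action demonstrations, encoded as item sets.
demos = [frozenset(t) for t in [
    {'close_hand', 'touching', 'object_held'},
    {'close_hand', 'touching', 'object_held'},
    {'close_hand', 'not_touching'},
    {'open_hand', 'object_dropped'},
    {'close_hand', 'touching', 'object_held'},
]]
learned = rules(apriori(demos, min_support=0.4), min_conf=0.9)
print(({'close_hand', 'touching'}, {'object_held'}, 1.0) in learned)   # True
```

A rule such as {close_hand, touching} → {object_held} is exactly the kind of action-effect regularity a planner can then chain backward from a goal, which is how the thesis's infant-inspired planner puts the mined rules to work.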


2021 ◽  
Author(s):  
Lysa Gramoli ◽  
Jeremy Lacoche ◽  
Anthony Foulonneau ◽  
Valerie Gouranton ◽  
Bruno Arnaldi
Keyword(s):  

Author(s):  
Eduardo C. Garrido-Merchán ◽  
Martín Molina ◽  
Francisco M. Mendoza-Soto

This work studies the beneficial properties that an autonomous agent can obtain by imitating a cognitive architecture similar to that of conscious beings. Throughout this document, a cognitive model of an autonomous agent based on a global workspace architecture is presented. We hypothesize that consciousness is an evolutionary advantage, so if our autonomous agent can be potentially conscious, its performance will be enhanced. We explore whether an autonomous agent implementing a cognitive architecture like the one proposed in global workspace theory can be conscious from a philosophy of mind perspective, with special emphasis on functionalism and multiple realizability. The purposes of our proposed model are twofold: to create autonomous agents that can navigate an environment composed of multiple independent magnitudes, adapting to their surroundings to find the best possible position according to their inner preferences; and to test the effectiveness of many of the model's cognitive mechanisms, such as an attention mechanism for magnitude selection, the possession of inner feelings and preferences, the use of a memory system to store beliefs and past experiences, and the incorporation into the decision-making process of a consciousness bottleneck that controls and integrates the information processed by all the subsystems of the model, as in global workspace theory. We show in a large set of experiments how potentially conscious autonomous agents can benefit from having a cognitive architecture such as the one described.
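The attention bottleneck described above can be sketched as a salience competition among subsystems, with the winner broadcast to the rest of the model. The class names, magnitudes, and preference weighting are illustrative assumptions, not the paper's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Percept:
    magnitude: str       # e.g. 'temperature', 'light' (names are illustrative)
    value: float
    salience: float      # urgency assigned by the perceiving subsystem

@dataclass
class GlobalWorkspace:
    """Toy global-workspace cycle: many subsystems offer percepts, a single
    winner passes the attention bottleneck and is broadcast (here, stored
    as a belief in memory for later decision-making)."""
    memory: list = field(default_factory=list)

    def cycle(self, percepts, preferences):
        # Attention as a preference-weighted salience competition.
        winner = max(percepts,
                     key=lambda p: p.salience * preferences.get(p.magnitude, 1.0))
        self.memory.append(winner)        # broadcast step
        return winner

ws = GlobalWorkspace()
percepts = [Percept('temperature', 35.0, 0.4),
            Percept('light', 0.9, 0.7),
            Percept('humidity', 0.5, 0.2)]
winner = ws.cycle(percepts, {'temperature': 2.0})   # inner preference for temperature
print(winner.magnitude)                             # temperature
```

The point of the bottleneck is that only one integrated piece of information drives the next decision, while the inner preferences bias which magnitude wins the competition, mirroring the role of feelings in the proposed model.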


Author(s):  
Craig J. Johnson ◽  
Mustafa Demir ◽  
Nathan J. McNeese ◽  
Jamie C. Gorman ◽  
Alexandra T. Wolff ◽  
...  

Objective: This work examines two human–autonomy team (HAT) training approaches that target communication and trust calibration to improve team effectiveness under degraded conditions.
Background: Human–autonomy teaming presents challenges to teamwork, some of which may be addressed through training. Factors vital to HAT performance include communication and calibrated trust.
Method: Thirty teams of three, including one confederate acting as an autonomous agent, received either entrainment-based coordination training, trust calibration training, or control training before executing a series of missions operating a simulated remotely piloted aircraft. Automation and autonomy failures simulating degraded conditions were injected during missions, and measures of team communication, trust, and task efficiency were collected.
Results: Teams receiving coordination training had higher communication anticipation ratios, took photos of targets faster, and overcame more autonomy failures. Although autonomy failures were introduced in all conditions, teams receiving the calibration training reported that their overall trust in the agent was more robust over time. However, they did not perform better than the control condition.
Conclusions: Training based on entrainment of communications, wherein the introduction of timely information exchange through one team member has lasting effects throughout the team, was positively associated with improvements in HAT communications and performance under degraded conditions. Training that emphasized the shortcomings of the autonomous agent appeared to calibrate expectations and maintain trust.
Applications: Team training that includes an autonomous agent that models effective information exchange may positively impact team communication and coordination. Training that emphasizes the limitations of an autonomous agent may help calibrate trust.
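The communication anticipation ratio mentioned in the Results is commonly operationalized in the teamwork literature as information pushed ahead of need divided by explicit information requests. The helper and log format below are an invented illustration of that computation, not the study's instrument:

```python
def anticipation_ratio(messages):
    """Pushes (information volunteered before being asked) per pull
    (explicit request); higher values indicate more anticipatory teams."""
    pushes = sum(1 for m in messages if m['type'] == 'push')
    pulls = sum(1 for m in messages if m['type'] == 'pull')
    return pushes / pulls if pulls else float('inf')

# Hypothetical mission communication log.
log = [
    {'from': 'pilot',     'type': 'push'},   # info sent before being asked
    {'from': 'navigator', 'type': 'push'},
    {'from': 'operator',  'type': 'pull'},   # an explicit request
    {'from': 'pilot',     'type': 'push'},
]
print(anticipation_ratio(log))               # 3 pushes / 1 pull = 3.0
```

Under this operationalization, the coordination-trained teams' higher ratios mean members more often supplied needed information unprompted rather than waiting to be asked.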

