goal reasoning: Recently Published Documents

Total documents: 21 (five years: 2)
H-index: 4 (five years: 0)

Author(s): Till Hofmann, Tarik Viehmann, Mostafa Gomaa, Daniel Habering, Tim Niemueller, ...
2021, Vol 97, pp. 104091
Author(s): Hadrien Bride, Jin Song Dong, Ryan Green, Zhé Hóu, Brendan Mahony, ...


Author(s): Héctor Muñoz-Avila, Dustin Dannenhauer, Noah Reifsnyder

In part motivated by topics such as agent safety, there is increasing interest in goal reasoning, a form of agency in which agents formulate their own goals. A crucial capability of goal reasoning agents is detecting whether the execution of their courses of action meets their own expectations. We present a taxonomy of the different forms of expectations that goal reasoning agents use when monitoring their own execution. We summarize and contrast the current understanding of how to define and check expectations based on the different knowledge sources used, and we identify gaps in that understanding.
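The expectation-checking idea described in this abstract can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the paper's taxonomy or code: the agent predicts each action's outcome from a STRIPS-like model, observes the world after execution, and reports the first step whose observed state violates its (state-based) expectation so it can re-deliberate.

```python
# Minimal sketch of expectation monitoring in a goal reasoning agent.
# All names and the action representation are illustrative; the paper
# distinguishes several richer forms of expectations than this one.

def apply_effects(state, action):
    """Predict the successor state from a STRIPS-like action model."""
    return (state - action["del"]) | action["add"]

def check_expectations(state, plan, observe):
    """Execute a plan, comparing observed states to predicted ones.

    Returns (True, None) if every step met expectations, otherwise
    (False, index_of_first_discrepancy) so the agent can re-deliberate.
    """
    for i, action in enumerate(plan):
        expected = apply_effects(state, action)
        state = observe(action)          # actual world state after execution
        if state != expected:            # state-based expectation violated
            return False, i
    return True, None

# Toy domain: a robot moving between rooms, with an executor whose
# move action silently fails (the robot stays in room A).
move = {"add": {"at-B"}, "del": {"at-A"}}
result = check_expectations({"at-A"}, [move], observe=lambda a: {"at-A"})
```

Here `result` is `(False, 0)`: the very first action failed to produce the expected state, which is exactly the kind of discrepancy that triggers goal reasoning.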



Author(s): Okan Topçu, Levent Yilmaz

Simulating battle management is an essential technique in planning and mission rehearsal as well as training. Simulation development costs tend to be high due to the complexity of the cognitive system architectures involved, and it takes significant effort for a simulation developer to comprehend the problem domain well enough to capture it accurately in simulation code. Domain-specific languages (DSLs) play an important role in narrowing the communication gap between the domain user and the developer, and hence facilitate rapid development. In command and control (C2) applications, the coalition battle management language (C-BML) serves as a DSL for exchanging battle information among C2 systems, simulations, and autonomous elements. In this article, we use a rapid prototyping framework for cognitive agents and demonstrate the deployment of agent systems by adopting a model-driven engineering approach. To this end, we extend the use of C-BML and automatically transform it into a cognitive agent model, which is then used for adaptive decision making at runtime. As a result, an agent's goal reasoning model can be initialized and modified during a simulation run. The cognitive agent models are based on the theory of deliberative coherence, which yields a goal reasoning system in terms of coherence-driven agents.
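The coherence-driven goal selection mentioned at the end of this abstract can be sketched loosely as a constraint network: goals and evidence are nodes, facilitation and incompatibility are weighted links, and activation is spread until it settles, after which the most active goal wins. The node names, weights, and update rule below are illustrative assumptions in the spirit of deliberative coherence, not the article's model.

```python
# Loose sketch of coherence-driven goal selection: spread activation
# over a network of goals and evidence until it settles. Positive link
# weights encode facilitation, negative weights encode incompatibility.

def settle(activation, links, clamp=None, steps=200, decay=0.05, rate=0.1):
    """Iteratively update node activations, clamped to [-1, 1].

    `clamp` pins observed-evidence nodes to fixed activations so that
    perception keeps driving the network on every iteration.
    """
    clamp = clamp or {}
    for _ in range(steps):
        nxt = {}
        for n in activation:
            net = 0.0
            for (a, b), w in links.items():
                if a == n:
                    net += w * activation[b]
                elif b == n:
                    net += w * activation[a]
            nxt[n] = max(-1.0, min(1.0, activation[n] * (1 - decay) + rate * net))
        nxt.update(clamp)   # re-pin evidence nodes
        activation = nxt
    return activation

# Toy battle-management choice: detected threat evidence facilitates
# "engage", and "engage" is incompatible with "patrol".
activation = {"patrol": 0.1, "engage": 0.1, "threat-detected": 1.0}
links = {("engage", "threat-detected"): 0.6,   # evidence facilitates "engage"
         ("patrol", "engage"): -0.5}           # the two goals are incompatible
result = settle(activation, links, clamp={"threat-detected": 1.0})
chosen = max(["patrol", "engage"], key=result.get)
```

With the evidence node pinned high, activation flows into "engage" and suppresses "patrol", so `chosen` ends up as `"engage"`; changing the clamped evidence at runtime would correspondingly shift which goal the agent adopts.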



Author(s): Till Hofmann, Nicolas Limpert, Victor Mataré, Alexander Ferrein, Gerhard Lakemeyer


AI Magazine, 2018, Vol 39 (2), pp. 3-24
Author(s): David W. Aha

Goal reasoning (GR) has a bright future as a foundation for the research and development of intelligent agents. GR is the study of agents that can deliberate on and self-select their goals/objectives, a desirable capability for some applications of deliberative autonomy. While GR has been studied in diverse AI sub-communities for multiple applications, our group has focused on how it can play a key role in controlling autonomous systems. Its importance is growing rapidly, and it merits increased attention, particularly from the perspective of research on AI safety. In this article, I introduce GR, briefly relate it to other AI topics, summarize some of our group's work on GR foundations and emerging applications, and describe some current and future research directions.



2018, Vol 31 (2), pp. 115-116
Author(s): Mark Roberts, Daniel Borrajo, Michael Cox, Neil Yorke-Smith


2018, Vol 31 (2), pp. 181-195
Author(s): Justin Karneeb, Michael W. Floyd, Philip Moore, David W. Aha


2018, Vol 31 (2), pp. 151-166
Author(s): Mark A. Wilson, James McMahon, Artur Wolek, David W. Aha, Brian H. Houston

