Models as autonomous agents

1999 ◽  
pp. 38-65 ◽  
Author(s):  
Margaret Morrison


2017 ◽  
Author(s):  
Eugenia Isabel Gorlin ◽  
Michael W. Otto

To live well in the present, we take direction from the past. Yet, individuals may engage in a variety of behaviors that distort their past and current circumstances, reducing the likelihood of adaptive problem solving and decision making. In this article, we attend to self-deception as one such class of behaviors. Drawing upon research showing both the maladaptive consequences and self-perpetuating nature of self-deception, we propose that self-deception is an understudied risk and maintaining factor for psychopathology, and we introduce a “cognitive-integrity”-based approach that may hold promise for increasing the reach and effectiveness of our existing therapeutic interventions. Pending empirical validation of this theoretically informed approach, we posit that patients may become more informed and autonomous agents in their own therapeutic growth by becoming more honest with themselves.


2021 ◽  
Vol 10 (2) ◽  
pp. 27 ◽  
Author(s):  
Roberto Casadei ◽  
Gianluca Aguzzi ◽  
Mirko Viroli

Research and technology developments on autonomous agents and autonomic computing promote a vision of artificial systems that are able to resiliently manage themselves and autonomously deal with issues at runtime in dynamic environments. Indeed, autonomy can be leveraged to unburden humans of mundane tasks (cf. driving and autonomous vehicles), to shield them from the risk of operating in unknown or perilous environments (cf. rescue scenarios), or to support timely decision-making in complex settings (cf. data-centre operations). Beyond what individual autonomous agents can accomplish on their own, a further opportunity lies in the collaboration of multiple agents or robots. Emerging macro-paradigms provide an approach to programming whole collectives towards global goals. Aggregate computing is one such paradigm, formally grounded in a calculus of computational fields enabling functional composition of collective behaviours that can be proved, under certain technical conditions, to be self-stabilising. In this work, we address the concept of collective autonomy, i.e., the form of autonomy that applies at the level of a group of individuals. As a contribution, we define an agent control architecture for aggregate multi-agent systems, discuss how the aggregate computing framework relates to both individual and collective autonomy, and show how it can be used to program collective autonomous behaviour. We exemplify the concepts through a simulated case study, and outline a research roadmap towards reliable aggregate autonomy.
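To give a concrete flavour of the kind of field computation this paradigm composes, the following is a minimal Python sketch of a self-stabilising gradient, a classic aggregate-computing building block. It is not the authors' framework or the ScaFi/Protelis API; the device identifiers, the `metric` parameter, and the synchronous round scheduler are assumptions made purely for illustration.

```python
# Illustrative sketch (not the paper's implementation): a self-stabilising
# "gradient" field. Each device repeatedly recomputes its value from its
# neighbours' values, so the collective converges to distances from the
# source(s) regardless of the initial state.

import math

def gradient_round(values, neighbours, sources, metric):
    """One synchronous round: every device applies the same local rule."""
    new_values = {}
    for device in values:
        if device in sources:
            new_values[device] = 0.0
        else:
            candidates = [values[n] + metric(device, n) for n in neighbours[device]]
            new_values[device] = min(candidates, default=math.inf)
    return new_values

def run_until_stable(neighbours, sources, metric, max_rounds=100):
    """Iterate rounds until a fixpoint is reached (the field has stabilised)."""
    values = {d: math.inf for d in neighbours}
    for _ in range(max_rounds):
        nxt = gradient_round(values, neighbours, sources, metric)
        if nxt == values:
            break
        values = nxt
    return values

if __name__ == "__main__":
    # A small line topology 0 - 1 - 2 - 3 with unit-distance links.
    neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(run_until_stable(neighbours, sources={0}, metric=lambda a, b: 1.0))
    # -> {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}
```

The point of the sketch is the compositional style: the same local rule, applied everywhere, yields a collective result that re-converges after perturbations, which is the property the abstract refers to as self-stabilisation.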


Author(s):  
László Bernáth

It is an increasingly popular view among philosophers that moral responsibility can, in principle, be attributed to unconscious autonomous agents. This trend is already remarkable in itself, but it is even more interesting that most proponents of this view provide more or less the same argument to support their position. I argue that, as it stands, the Extension Argument, as I call it, is not sufficient to establish the thesis that unconscious autonomous agents can be morally responsible. I attempt to show that the Extension Argument would have to overcome especially strong ethical considerations; moreover, its epistemological grounds are not particularly solid, partly because the justifications of its premises are in conflict.


Author(s):  
Sam Hepenstal ◽  
Leishi Zhang ◽  
Neesha Kodogoda ◽  
B.L. William Wong

Criminal investigations involve repetitive and time-consuming information retrieval tasks, often with high risk and high consequences. If artificial intelligence (AI) systems can automate lines of inquiry, they could reduce the burden on analysts and allow them to focus their efforts on analysis. However, there is a critical need for algorithmic transparency to address ethical concerns. In this paper, we use data gathered from Cognitive Task Analysis (CTA) interviews with criminal intelligence analysts and apply a novel analysis method to elicit question networks. We show how these networks form an event tree in which events are consolidated by capturing analyst intentions. The event tree is simplified with a Dynamic Chain Event Graph (DCEG), which provides a foundation for transparent autonomous investigations.
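To make the underlying data structure concrete, here is a minimal Python sketch of the general idea: lines of inquiry are recorded as question sequences in an event tree, and positions that share the same future structure are grouped together, which is the basic intuition behind simplifying an event tree towards a chain event graph. The question strings, helper names, and the grouping rule are illustrative assumptions, not the authors' method or data.

```python
# Illustrative sketch (not the paper's implementation): build an event tree
# from analysts' question sequences, then group positions whose future
# question structure is identical -- the intuition behind consolidating an
# event tree towards a (dynamic) chain event graph.

from collections import defaultdict

def build_event_tree(question_sequences):
    """Nested dict: each key is a question, each value the subtree that follows it."""
    tree = {}
    for sequence in question_sequences:
        node = tree
        for question in sequence:
            node = node.setdefault(question, {})
    return tree

def signature(node):
    """Canonical form of a subtree, used to detect positions with the same future."""
    return tuple(sorted((q, signature(child)) for q, child in node.items()))

def group_positions(tree):
    """Count how many positions in the tree share each future structure."""
    groups = defaultdict(int)

    def visit(node):
        groups[signature(node)] += 1
        for child in node.values():
            visit(child)

    visit(tree)
    return groups

if __name__ == "__main__":
    # Hypothetical lines of inquiry, purely for illustration.
    inquiries = [
        ["who owns the vehicle?", "any prior offences?", "known associates?"],
        ["who was at the scene?", "any prior offences?", "known associates?"],
        ["who owns the vehicle?", "registered address?"],
    ]
    tree = build_event_tree(inquiries)
    for sig, count in group_positions(tree).items():
        if count > 1:
            print(f"{count} positions share the same future questions: {sig}")
```

In a full DCEG these groupings would carry probabilistic semantics over time; the sketch only shows how repeated question structure lets an event tree be compacted into something small enough to inspect, which is what makes the representation useful for transparency.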


2004 ◽  
Vol 49 (1-2) ◽  
pp. 113-122 ◽  
Author(s):  
Robert E. Wray ◽  
Sean A. Lisse ◽  
Jonathan T. Beard