Associative classification with evolutionary autonomous agents

Author(s):  
Li Zhao ◽  
Duwu Cui ◽  
Lei Wang


2017 ◽  
Author(s):  
Eugenia Isabel Gorlin ◽  
Michael W. Otto

To live well in the present, we take direction from the past. Yet individuals may engage in a variety of behaviors that distort their view of past and current circumstances, reducing the likelihood of adaptive problem solving and decision making. In this article, we attend to self-deception as one such class of behaviors. Drawing upon research showing both the maladaptive consequences and the self-perpetuating nature of self-deception, we propose that self-deception is an understudied risk and maintaining factor for psychopathology, and we introduce a “cognitive-integrity”-based approach that may hold promise for increasing the reach and effectiveness of existing therapeutic interventions. Pending empirical validation of this theoretically informed approach, we posit that patients may become more informed and autonomous agents in their own therapeutic growth by becoming more honest with themselves.


1999 ◽  
pp. 38-65 ◽  
Author(s):  
Margaret Morrison

2021 ◽  
Vol 10 (2) ◽  
pp. 27
Author(s):  
Roberto Casadei ◽  
Gianluca Aguzzi ◽  
Mirko Viroli

Research and technology developments on autonomous agents and autonomic computing promote a vision of artificial systems that are able to resiliently manage themselves and autonomously deal with issues at runtime in dynamic environments. Indeed, autonomy can be leveraged to unburden humans from mundane tasks (cf. driving and autonomous vehicles), to remove the risk of operating in unknown or perilous environments (cf. rescue scenarios), or to support timely decision-making in complex settings (cf. data-centre operations). Beyond what individual autonomous agents can achieve on their own, a further opportunity lies in the collaboration of multiple agents or robots. Emerging macro-paradigms provide an approach to programming whole collectives towards global goals. Aggregate computing is one such paradigm, formally grounded in a calculus of computational fields enabling functional composition of collective behaviours that can be proved, under certain technical conditions, to be self-stabilising. In this work, we address the concept of collective autonomy, i.e., the form of autonomy that applies at the level of a group of individuals. As a contribution, we define an agent control architecture for aggregate multi-agent systems, discuss how the aggregate computing framework relates to both individual and collective autonomy, and show how it can be used to program collective autonomous behaviour. We exemplify the concepts through a simulated case study, and outline a research roadmap towards reliable aggregate autonomy.
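To make the self-stabilisation claim concrete, here is a minimal sketch of a "gradient" field, a classic aggregate-computing building block: each agent repeatedly takes the minimum of its neighbours' values plus one, and the field stabilises to hop-count distance from the source regardless of initial state. The grid of agents, the neighbourhood function, and the synchronous fixed-point loop are illustrative assumptions for this sketch, not the paper's actual field calculus.

```python
def gradient(n, sources, neighbours):
    """Compute a distance-gradient field over agents 0..n-1.

    Each non-source agent repeatedly updates to min(neighbour + 1);
    sources hold 0. The loop runs until no value changes, i.e. the
    field has self-stabilised to hop-count distance from a source.
    """
    INF = float("inf")
    field = {a: (0.0 if a in sources else INF) for a in range(n)}
    changed = True
    while changed:
        changed = False
        for a in range(n):
            if a in sources:
                continue
            best = min((field[b] + 1 for b in neighbours(a)), default=INF)
            if best != field[a]:
                field[a] = best
                changed = True
    return field

# Five agents in a line; agent 0 is the source.
line = lambda a: [b for b in (a - 1, a + 1) if 0 <= b < 5]
print(gradient(5, {0}, line))  # {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0, 4: 4.0}
```

Because each update is a local minimisation, the same code converges to the same field from any starting values or after a topology change, which is the sense of self-stabilisation the abstract refers to.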


Author(s):  
László Bernáth

It is an increasingly popular view among philosophers that moral responsibility can, in principle, be attributed to unconscious autonomous agents. This trend is already remarkable in itself, but it is even more interesting that most proponents of this view provide more or less the same argument to support their position. I argue that, as it stands, the Extension Argument, as I call it, is not sufficient to establish the thesis that unconscious autonomous agents can be morally responsible. I attempt to show that the Extension Argument would have to overcome especially strong ethical considerations; moreover, its epistemological grounds are not particularly solid, partly because the justifications of its premises conflict with one another.

