moral agency
Recently Published Documents


TOTAL DOCUMENTS

859
(FIVE YEARS 219)

H-INDEX

28
(FIVE YEARS 2)

2021 ◽  
pp. 000313482110651
Author(s):  
Allan Peetz ◽  
Marie Kuzemchak ◽  
Catherine Hammack ◽  
Oscar D Guillamondegui ◽  
Bradley M. Dennis ◽  
...  

Background Trauma surgeons face a challenge when deciding whether to resuscitate lethally injured patients whose organ donor status is unknown. Data suggest practice pattern variability in this setting, but little is known about why. Materials and Methods We conducted semi-structured interviews with trauma surgeons practicing in Level 1 or 2 trauma centers in Tennessee. Interviews focused on ethical dilemmas and resource constraints. Analysis was performed using inductive thematic analysis. Results The response rate was 73% (11/15). Four key themes emerged. All participants described resuscitating patients to buy time to collect more definitive clinical information and to identify family; some acknowledged this served the secondary purpose of organ preservation. All 11 participants felt a primacy of obligation to the patient in front of them, even after it became apparent the patient could not personally benefit. For 9/11 (82%), the moral obligation to consider organ preservation was secondary/balancing; 2/11 (18%) felt it was irrelevant or immoral. Resource allocation was commonly considered, and all participants expressed some limit to the resources they would allocate. All participants conveyed clear moral agency in determining the extent of resuscitation when the goal was to save the patient’s life; this was less clear when resuscitating for organ preservation. Across themes, perceptions of a “standard practice” existed, but the described practices were not consistent across interviewees. Discussion Widely ranging perceptions of ethical and resource considerations underlie practices of resuscitation toward organ preservation. Common themes suggest a lack of consensus. Despite expressed beliefs, there is no identifiable standard of practice among trauma surgeons resuscitating in this setting.


2021 ◽  
Author(s):  
Richard M. Lerner ◽  
Marc H. Bornstein ◽  
Pamela Jervis

Positive character involves a system of mutually beneficial relations between individual and context that coherently vary across ontogenetic time and enable individuals to engage the social world as moral agents. We present ideas about the development of positive character attributes using three constructs associated with relational developmental systems (RDS) metatheory: the specificity of mutually beneficial individual↔context dynamics across time and place; the holistic integration of an individual’s dynamic processes with both context and all cognitive, affective, and behavioral processes; and the integration of the character system with other facets of the self-system. These features of RDS-based ideas coalesce around the embodiment of positive character development. We discuss the need for more robust interrogation of embodied features of the character development system, suggesting that the coaction of morphological/physiological processes with cultural processes become part of a program of research on the integrated individual↔context processes involved in the description, explanation, and optimization of positive character attribute development. We discuss the moments of programmatic research that should be involved in this interrogation and point to the potential contribution of theory-predicated research on the embodied development of positive character attributes to enhancing the presence of moral agency and social justice in the world.


2021 ◽  
Vol 8 ◽  
Author(s):  
Dane Leigh Gogoshin

It is almost a foregone conclusion that robots cannot be morally responsible agents, both because they lack traditional features of moral agency like consciousness, intentionality, or empathy and because of the apparent senselessness of holding them accountable. Moreover, although some theorists include them in the moral community as moral patients, on the Strawsonian picture of moral community as requiring moral responsibility, robots are typically excluded from membership. By looking closely at our actual moral responsibility practices, however, I determine that the agency reflected and cultivated by them is limited to the kind of moral agency of which some robots are capable, not the philosophically demanding sort behind the traditional view. Hence, moral rule-abiding robots (if feasible) can be sufficiently morally responsible and thus moral community members, despite certain deficits. Alternative accountability structures, which I argue ought to be in place for those existing moral community members who share these deficits, could address them.


2021 ◽  
pp. 1470594X2110526
Author(s):  
Anne-Sofie Greisen Højlund

Many find that the objectionable nature of paternalism has something to do with belief. However, since it is commonly held that beliefs are directly governed by epistemic as opposed to moral norms, how could it be objectionable to hold paternalistic beliefs about others if they are supported by the evidence? Drawing on central elements of relational egalitarianism, this paper attempts to bridge this gap. In a first step, it argues that holding paternalistic beliefs about others implies a failure to regard them as equals in terms of their moral agency. In a second step, it shows that the fact that we should regard others as equals in this sense raises the threshold of evidential sufficiency for paternalistic beliefs to be epistemically justified. That is, moral reasons of relational equality encroach on the epistemic. These reasons are not decisive, however. In cases where others are about to jeopardize critical goods such as their lives, mobility, or future autonomy, relational equality sometimes calls for paternalistic action and, by extension, the formation of beliefs that render such action rational. The upshot is that in order to meet the demands of relational equality, we have a pro tanto reason not to hold paternalistic beliefs about others.


2021 ◽  
pp. 446-465
Author(s):  
Samuel E. Balentine

Can there be moral agency without autonomy? Absent the freedom to deliberate, make a choice, and enact a decision, does the covenantal relationship described in Jeremiah 31 construe fidelity to God as anything more than involuntary obedience? Put differently, if both the covenantal requirements and the decision to obey them are externally inscribed on the human heart, if like computer software they are “programmed” into the operating system, do humans automatically surrender their freedom for thinking about moral decisions? This chapter examines the language of moral selfhood (both divine and human) in Jeremiah, with special attention to trauma theory as a hermeneutical lens for thinking about the “wounding of the mind” wrought by the experience of exile.


2021 ◽  
pp. 50-70
Author(s):  
Barbara Herman

This chapter explores the imperfect duty of non-negligence, or due care. It is a complex secondary duty that regulates the performance of primary duties. Its norms of attention and execution are responsive to a primary duty’s interpreted value. Due care often requires motivational capacities that track moral value across complex circumstances of action, a claim inconsistent with the dictum that duties cannot impose requirements that depend on motive. Middle Work 3 argues that the dictum depends on a rejectable view of motive, one modeled on a modular account of simple desires. The idea of a system motive is introduced as an affective organization that makes an agent responsive to a region of value. This makes a moral motive an agential response to moral value, and moral agency a motive-involved competence. We can then have a motive-involved duty without having a duty to have a motive.


Author(s):  
Barbara Herman

The Moral Habitat is a book in three parts. It begins with an investigation of three understudied imperfect duties which together offer important and challenging insights about moral requirements and moral agency: that our duties make sense only as a system; that actions can be morally wrong to do and yet not be impermissible; and that there are motive-dependent duties. In Part Two, these insights are used to launch a substantial reinterpretation of Kant’s ethics as a system of duties, juridical and ethical, perfect and imperfect, that can incorporate what we learn from imperfect duties and do much more. The system of duties provides the structure for what I call a moral habitat: a made environment, created by and for free and equal persons living together. It is a dynamic system, with duties from the juridical and ethical spheres shaping and being affected by each other, each level further interpreting the system’s core anti-subordination value, initiated in Kant’s account of innate right. The structure of an imperfect duty is exhibited in a detailed account of the duty of beneficence, including its latitude of application and its demandingness. Part Three takes up implications and applications of the moral habitat idea. Its topics range from the adjustments to the system that would come with recognizing a human right to housing, to meta-ethical issues about objectivity and our responsibility for moral change. The upshot is a transformative, holistic, agent- and institution-centered account of Kantian morality.


Problemos ◽  
2021 ◽  
Vol 100 ◽  
pp. 139-151
Author(s):  
Riya Manna ◽  
Rajakishore Nath

This paper discusses the philosophical issues pertaining to Kantian moral agency and artificial intelligence (AI). Our objective here is to offer a comprehensive analysis of Kantian ethics that elucidates the non-feasibility of Kantian machines; moreover, the possibility of Kantian machines seems to contend with genuine human Kantian agency. We argue that in machine morality, ‘duty’ should be performed with ‘freedom of will’ and ‘happiness’, because Kant described the human tendency to evaluate our ‘natural necessity’ through ‘happiness’ as the end. Lastly, we argue that the Kantian ‘freedom of will’ and ‘faculty of choice’ do not belong to any deterministic model of ‘agency’, as these are sacrosanct systems. The conclusion affirms the non-feasibility of Kantian AI agents from a genuinely Kantian ethical standpoint and offers a utility-based Kantian ethical performer instead.

