Removal of kidneys from living donors: Technical or ethical problem?

1997 ◽  
Vol 29 (5) ◽  
pp. 2429-2430 ◽  
Author(s):  
M. Chebil ◽  
H. Loussaief ◽  
M. Hajri ◽  
K. Kellou ◽  
M. R. Ben Slama ◽  
...  
2011 ◽  
Vol 2011 ◽  
pp. 1-10 ◽  
Author(s):  
Carlo Petrini

A sound evaluation of every bioethical problem should be predicated on a careful analysis of at least two basic elements: (i) reliable scientific information and (ii) the ethical principles and values at stake. A thorough evaluation of both elements also calls for a careful examination of statements by authoritative institutions. Unfortunately, in the case of medically complex living donors, neither element gives clear-cut answers to the ethical problems raised. Likewise, institutional documents frequently offer only general criteria, which are not very helpful when making practical choices. This paper first presents a brief overview of the scientific information, ethical values, and institutional documents; the notions of “acceptable risk” and “minimal risk” are then briefly examined with reference to the problem of medically complex living donors. Finally, the so-called precautionary principle and the value of solidarity are discussed as offering a possible approach to the ethical problem of medically complex living donors.


2021 ◽  
Vol 12 (1) ◽  
pp. 310-335
Author(s):  
Selmer Bringsjord ◽  
Naveen Sundar Govindarajulu ◽  
Michael Giancola

Abstract: Suppose an artificial agent a_adj, as time unfolds, (i) receives propositional content from multiple artificial agents (which may, in turn, themselves have received it from yet other such agents…), and (ii) must solve an ethical problem on the basis of what it has received. How should a_adj adjudicate what it has received in order to produce such a solution? We consider an environment infused with logicist artificial agents a_1, a_2, …, a_n that sense and report their findings to “adjudicator” agents who must solve ethical problems. (Many if not most of these agents may be robots.) In such an environment, inconsistency is a virtual guarantee: a_adj may, for instance, receive a report from a_1 that proposition φ holds, then from a_2 that ¬φ holds, and then from a_3 that neither φ nor ¬φ should be believed, but rather ψ instead, at some level of likelihood. We further assume that agents receiving such incompatible reports will nonetheless sometimes simply need, before long, to make decisions on the basis of these reports, in order to try to solve ethical problems. We provide a solution to such a quandary: AI capable of adjudicating competing reports from subsidiary agents through time, and delivering to humans a rational, ethically correct (relative to underlying ethical principles) recommendation based upon such adjudication. To illuminate our solution, we anchor it to a particular scenario.
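The abstract does not spell out the authors' adjudication logic, which is built on formal logicist machinery. Purely as an illustrative sketch of the scenario it describes (conflicting, likelihood-tagged reports arriving over time, from which a single recommendation must be drawn), the following Python fragment resolves the φ / ¬φ / ψ conflict by preferring the report with the strongest likelihood, breaking ties in favor of the most recent report. All names here (`Report`, `adjudicate`, the ordinal likelihood scale) are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical model of a likelihood-tagged report from a subsidiary agent.
@dataclass
class Report:
    agent: str        # reporting agent, e.g. "a1"
    claim: str        # proposition reported, e.g. "phi" or "not phi"
    likelihood: int   # ordinal strength of belief (higher = stronger)
    time: int         # arrival order of the report

def adjudicate(reports):
    """Keep the strongest report per claim, then return the claim whose
    best report has the highest (likelihood, recency) ranking."""
    best = {}
    for r in reports:
        cur = best.get(r.claim)
        if cur is None or (r.likelihood, r.time) > (cur.likelihood, cur.time):
            best[r.claim] = r
    winner = max(best.values(), key=lambda r: (r.likelihood, r.time))
    return winner.claim

# The scenario from the abstract: a1 reports phi, a2 reports not-phi,
# a3 reports that psi should be believed instead, at higher likelihood.
reports = [
    Report("a1", "phi", likelihood=2, time=1),
    Report("a2", "not phi", likelihood=2, time=2),
    Report("a3", "psi", likelihood=3, time=3),
]
print(adjudicate(reports))  # psi — a3's report carries the highest likelihood
```

A serious adjudicator would of course reason over the logical relations between claims (¬φ contradicts φ) rather than treating them as opaque strings; this sketch only illustrates why some tie-breaking policy over likelihood and time is needed once inconsistency is guaranteed.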


2021 ◽  
Vol 21 (4) ◽  
pp. 110-112
Author(s):  
Philip Ghobrial ◽  
Sanjeev Akkina ◽  
Emily E. Anderson

2020 ◽  
Vol 84 ◽  
pp. 147-153
Author(s):  
Kosei Takagi ◽  
Yuzo Umeda ◽  
Ryuichi Yoshida ◽  
Nobuyuki Watanabe ◽  
Takashi Kuise ◽  
...  
