Nanotechnology in aquaculture: Applications, perspectives and regulatory challenges

Author(s): Carlos Fajardo, Gonzalo Martinez-Rodriguez, Julian Blasco, Juan Miguel Mancera, Bolaji Thomas, ...
Author(s): Gretchen P. Kenagy, Barbara S. Schneidman, Barbara Barzansky, Claudette E. Dalton, Carl A. Sirio, ...
2011, Vol. 97(1), pp. 10-15

ABSTRACT Physician reentry to clinical practice is fast becoming recognized as an issue of central importance in discussions about the physician workforce. While there are few empirical studies, existing data show that increasing numbers of physicians take a leave of absence from practice at some point during their careers; this trend is expected to continue. The process of returning to clinical practice is coming under scrutiny due to the public's increasing demand for transparency regarding physician competence. Criteria for medical licensure often do not include an expectation of ongoing clinical activity. Physicians who maintain a license but do not practice for a period of time may therefore be reentering the workforce with unknown competency to practice. This paper: (1) presents survey data on current physician reentry policies of state medical boards; (2) discusses the findings from the survey within the context of regulatory challenges that impact physician reentry; and (3) offers recommendations to facilitate the development of comprehensive, coordinated regulatory policies on physician reentry.


Author(s):  
Christian List

ABSTRACT The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and a moral status? I will tentatively defend the (increasingly widely held) view that, under certain conditions, artificial intelligent systems, like corporate entities, might qualify as responsible moral agents and as holders of limited rights and legal personhood. I will further suggest that regulators should permit the use of autonomous artificial systems in high-stakes settings only if they are engineered to function as moral (not just intentional) agents and/or there is some liability-transfer arrangement in place. I will finally raise the possibility that if artificial systems ever became phenomenally conscious, there might be a case for extending a stronger moral status to them, but argue that, as of now, this remains very hypothetical.

