An Ethical Framework for the Design, Development, Implementation, and Assessment of Drones Used in Public Healthcare

2020 ◽  
Vol 26 (5) ◽  
pp. 2867-2891 ◽  
Author(s):  
Dylan Cawthorne ◽  
Aimee Robbins-van Wynsberghe

The use of drones in public healthcare is suggested as a means to improve efficiency under constrained resources and personnel. This paper begins by framing drones in healthcare as a social experiment where ethical guidelines are needed to protect those impacted while fully realizing the benefits the technology offers. Then we propose an ethical framework to facilitate the design, development, implementation, and assessment of drones used in public healthcare. Given the healthcare context, we structure the framework according to the four bioethics principles: beneficence, non-maleficence, autonomy, and justice, plus a fifth principle from artificial intelligence ethics: explicability. These principles are abstract, which makes operationalization a challenge; therefore, we suggest an approach of translation according to a values hierarchy, whereby the top-level ethical principles are translated into relevant human values within the domain. The resulting framework is an applied ethics tool that facilitates awareness of relevant ethical issues during the design, development, implementation, and assessment of drones in public healthcare.

2019 ◽  
Vol 15 (3) ◽  
pp. 111-127
Author(s):  
Tiziana C. Callari ◽  
Louise Moody ◽  
Janet Saunders ◽  
Gill Ward ◽  
Julie Woodley

Living Lab (LL) research should follow clear ethical guidelines and principles. While these exist in specific disciplinary contexts, there is a lack of tailored and specific ethical guidelines for the design, development, and implementation of LL projects. Beyond the complexity of these dynamic and multi-faceted contexts, the engagement of older adults, and of adults with declining cognitive and physical capacity, in LL research poses additional ethical challenges. Semi-structured interviews were undertaken with 26 participants to understand multistakeholder experiences related to user engagement and related ethical issues in emerging LL research. The participants' experiences and concerns are reported and translated into an ethical framework to guide future LL research initiatives.


2021 ◽  
Vol 3 (10) ◽  
Author(s):  
Bianca Weber-Lewerenz

Digitization is developing fast and has become a powerful tool for digital planning, construction, and operations, for instance through digital twins. Now is the right time for constructive approaches and for applying ethics-by-design in order to develop and implement safe and efficient artificial intelligence (AI) applications. So far, no study has addressed the key research question: Where can corporate digital responsibility (CDR) be allocated, and how should an adequate ethical framework be designed to support digital innovations in order to make full use of the potentials of digitization and AI? Research on how best practices meet their corporate responsibility in the digital transformation process, and on the EU requirements for trustworthy AI and its human-friendly use, is therefore essential. This transformation bears high potential for companies, is critical for success, and thus requires responsible handling. This study generates data by conducting case studies and interviewing experts as part of a qualitative method to gain profound insights into applied practice. It provides an assessment of the demands stated in the United Nations Sustainable Development Goals (SDGs) and in White Papers on AI by international institutions, the European Commission, and the German Government, which request the consideration and protection of values and fundamental rights, the careful demarcation between machine (artificial) and human intelligence, and the careful use of such technologies. The study discusses digitization and the impacts of AI in construction engineering from an ethical perspective, and critically evaluates opportunities and risks concerning CDR in the construction industry. To the author's knowledge, no study has set out to investigate how CDR in construction could be conceptualized, especially in relation to digitization and AI, to shape digital transformation in large, medium-, and small-sized companies.
This study applies a holistic, interdisciplinary, inclusive approach to provide guidelines for orientation and to examine the benefits as well as the risks of AI. Furthermore, the goal is to define ethical principles that are key to success, resource-, cost-, and time-efficiency, and sustainability when using digital technologies and AI in construction engineering to enhance digital transformation. This study concludes that innovative corporate organizations starting new business models are more likely to succeed than those dominated by a more conservative, traditional attitude.


2020 ◽  
Vol 31 (2) ◽  
pp. 74-87 ◽  
Author(s):  
Keng Siau ◽  
Weiyu Wang

Artificial intelligence (AI)-based technology has achieved many great things, such as facial recognition, medical diagnosis, and self-driving cars. AI promises enormous benefits for economic growth, social development, as well as human well-being and safety improvement. However, the low level of explainability, data biases, data security, data privacy, and ethical problems of AI-based technology pose significant risks for users, developers, humanity, and societies. As AI advances, one critical issue is how to address the ethical and moral challenges associated with it. Even though the concept of "machine ethics" was proposed around 2006, AI ethics is still in its infancy. AI ethics is the field devoted to the study of ethical issues in AI. To address AI ethics, one needs to consider the ethics of AI and how to build ethical AI. Ethics of AI studies the ethical principles, rules, guidelines, policies, and regulations that are related to AI. Ethical AI is an AI that performs and behaves ethically. One must recognize and understand the potential ethical and moral issues that may be caused by AI in order to formulate the necessary ethical principles, rules, guidelines, policies, and regulations for AI (i.e., ethics of AI). With the appropriate ethics of AI, one can then build AI that exhibits ethical behavior (i.e., ethical AI). This paper discusses AI ethics by looking at both the ethics of AI and ethical AI. What are the perceived ethical and moral issues with AI? What are the general and common ethical principles, rules, guidelines, policies, and regulations that can resolve, or at least attenuate, these ethical and moral issues? What are some of the necessary features and characteristics of an ethical AI? How can one adhere to the ethics of AI in order to build ethical AI?


Author(s):  
Anri Leimanis

Advances in Artificial Intelligence (AI) applications to education have encouraged an extensive global discourse on the underlying ethical principles and values. In response, numerous research institutions, companies, public agencies, and non-governmental entities around the globe have published their own guidelines and/or policies for ethical AI. Even though the aim of most of the guidelines is to maximize the benefits that AI delivers to education, the policies differ significantly in content as well as application. In order to facilitate further discussion about ethical principles and the responsibilities of educational institutions using AI, and to potentially arrive at a consensus concerning safe and desirable uses of AI in education, this paper evaluates these self-imposed AI ethics guidelines, identifying the common principles and approaches as well as the drawbacks limiting the practical and legal application of the policies.


2021 ◽  
Vol 11 ◽  
Author(s):  
Stéphane Mouchabac ◽  
Vladimir Adrien ◽  
Clara Falala-Séchet ◽  
Olivier Bonnot ◽  
Redwan Maatoug ◽  
...  

Patients' decision-making abilities are often altered in psychiatric disorders. The legal framework of psychiatric advance directives (PADs) was created to provide care to patients in these situations while respecting their free and informed consent. The implementation of artificial intelligence (AI) within Clinical Decision Support Systems (CDSS) may improve the complex decisions that are often made in situations covered by PADs. Still, it raises theoretical and ethical issues that this paper aims to address. First, it goes through every level of possible AI intervention in the PAD drafting process: which data sources the AI could access, whether its data-processing competencies should be limited, at which moments it should be used, and its place in the contractual relationship between each party (patient, caregivers, and trusted person). Second, it focuses on ethical principles and how these principles, whether medical principles (autonomy, beneficence, non-maleficence, justice) applied to AI or AI principles (loyalty and vigilance) applied to medicine, should be taken into account in the future PAD drafting process. Some general guidelines are proposed in conclusion: AI must remain a decision support system, acting as a partner of each party to the PAD contract; patients should be able to choose a personalized type of AI intervention, or no AI intervention at all; they should stay informed, i.e., understand the functioning and relevance of AI thanks to educational programs; finally, a committee should be created to ensure the principle of vigilance by auditing these new tools in terms of successes, failures, security, and relevance.


Author(s):  
Virginia Dignum

This chapter explores the concept of responsibility in artificial intelligence (AI). Being fundamentally tools, AI systems are fully under the control and responsibility of their owners or users. However, their potential autonomy and capability to learn require that design considers accountability, responsibility, and transparency principles in an explicit and systematic manner. The main concern of Responsible AI is thus the identification of the relative responsibility of all actors involved in the design, development, deployment, and use of AI systems. Firstly, society must be prepared to take responsibility for AI impact. Secondly, Responsible AI implies the need for mechanisms that enable AI systems to act according to ethics and human values. Lastly, Responsible AI is about participation. It is necessary to understand how different people work with and live with AI technologies across cultures in order to develop frameworks for responsible AI.


Author(s):  
Deepak Saxena ◽  
Markus Lamest ◽  
Veena Bansal

Artificial intelligence (AI) systems have become a new reality of modern life. They are now ubiquitous in virtually all socio-economic activities in business and industry. Given the extent of AI's influence on our lives, it is imperative to focus our attention on the ethics of AI. While humans develop their moral and ethical framework via self-awareness and reflection, the current generation of AI lacks these abilities. Drawing from the concept of the human-AI hybrid, this chapter offers managerial and developer actions towards responsible machine learning for ethical artificial intelligence. The actions consist of privacy by design, development of explainable AI, identification and removal of inherent biases, and, most importantly, using AI as a moral enabler. Application of these actions would not only help towards ethical AI; it would also help support the moral development of the human-AI hybrid.


2021 ◽  
Vol 70 (1) ◽  
pp. 35-53
Author(s):  
Elena Ferioli

The complexity of biobank research has recently increased, generating a number of novel ethical issues. In recent years the University of Insubria has been committed to providing specific training programs in Bioethics, Applied Ethics, and Clinical Ethics, aimed at addressing critical topics related to medicine, research, and biobanking. We have designed the Insubria Biobank as a research infrastructure with an appropriate ethical framework, responsible for the custody of biospecimens and data according to a model of Charitable Trust. Answering certain questions is therefore crucial: How can the biobank respect the trust placed in it? What resources could promote the goals of the biobank? Do professionals require specific ethical training? This credit of trust must be fed and confirmed by the ethical choices of the biobank, ensuring maximum transparency and traceability of decisions. The aim of the Insubria Biobank is to become an ethical subject, to secure public trust, and to define the ethics criteria that will be made public and with which the biobank will comply. In our model we propose the instruments that could guarantee the achievement of this goal: informed consent, a Charter of Principles, and Biobank Ethics Consultation Services (BECS). Our purpose is to offer a Charter of Principles and BECS to help scientists, healthcare professionals, patients, donors, institutional review boards, and policymakers better navigate the ethical issues in biobanking. An exploratory survey to identify the willingness to use BECS represents our future research plan.


2018 ◽  
Vol 38 (05) ◽  
pp. 505-514 ◽  
Author(s):  
Xiaowei Su ◽  
Zachary Simmons

Recent advances in the genetics of neurologic diseases coupled with improvements in sensitivity and specificity are making genetic testing an increasingly important part of diagnosis and management for neurologists. However, the complex nature of genetic testing, the nuances of multiple result types, and the short- and long-term consequences of genetic diagnoses raise important ethical issues for the clinician. Neurologists must balance the ethical principles of beneficence and nonmaleficence, on the one hand, with patient autonomy on the other hand, when ordering such tests by facilitating shared decision making, carrying out their fiduciary responsibilities to patients, and ensuring that patients have adequate counseling to make informed decisions. This review summarizes ethical issues related to genetic testing for neurologic diseases, with a focus on clinical practice. Informed consent for genetic testing of patients and asymptomatic at-risk family members is discussed. The roles and responsibilities of physicians as genetic counselors are reviewed, including the framing of incidental findings and variants of unknown significance that impact individuals' decisions about whether to pursue genetic testing and what results they wish to know. Disclosure and its consequences for the patient are placed within an ethical framework to permit a better understanding of why genetic testing is different from most other diagnostic testing ordered by physicians. The review ends with clinical vignettes that attempt to place ethical principles into familiar clinical settings involving physicians, patients, and their families.


Author(s):  
Jaana Leikas ◽  
Raija Koivisto ◽  
Nadezhda Gotcheva

To gain the potential benefits of autonomous intelligent systems, their design and development need to be aligned with fundamental values and ethical principles. We need new design approaches, methodologies, and processes to deploy ethical thought and action in the context of autonomous intelligent systems. To open this discussion, this article presents a review of ethical principles in the context of artificial intelligence design and introduces an ethical framework for designing autonomous intelligent systems. The framework is based on an iterative, multidisciplinary perspective combined with a systematic discussion during the Autonomous Intelligent Systems (AIS) design process, and on relevant ethical principles for the concept design of autonomous systems. We propose using scenarios as a tool to capture the essential user- or stakeholder-specific qualitative information needed for a systematic analysis of ethical issues in the specific design case.

