Ethics of Corporeal, Co-present Robots as Agents of Influence: A Review

Author(s):  
AJung Moon ◽  
Shalaleh Rismani ◽  
H. F. Machiel Van der Loos

Purpose of Review: To summarize the set of roboethics issues that arise uniquely from the corporeality and physical interaction modalities afforded by robots, irrespective of the degree of artificial intelligence present in the system. Recent Findings: A recent trend in the discussion of the ethics of emerging technologies has been to treat roboethics issues as those of "embodied AI," a subset of AI ethics. In contrast to AI, however, robots leverage humans' natural tendency to be influenced by their physical environment. Recent work in human-robot interaction highlights the impact that a robot's presence, capacity to touch, and ability to move in our physical environment have on people, and it helps articulate the ethical issues particular to the design of interactive robotic systems. Summary: The corporeality of interactive robots poses a unique set of ethical challenges. These issues should be considered in design irrespective of, and in addition to, the ethics of any artificial intelligence implemented in the system.

This book explores the intertwining domains of artificial intelligence (AI) and ethics, two highly divergent fields which at first seem to have nothing to do with one another. AI is a collection of computational methods for studying human knowledge, learning, and behavior, including by building agents able to know, learn, and behave. Ethics is a body of human knowledge, far from completely understood, that helps agents (humans today, but perhaps eventually robots and other AIs) decide how they and others should behave. Despite these differences, the rapid development of AI technology has led to a growing number of ethical issues in a multitude of fields, ranging from disciplines as far-reaching as international human rights law to issues as intimate as personal identity and sexuality. In fact, the number and variety of topics in this volume illustrate the breadth, diversity of content, and at times exasperating vagueness of the boundaries of "AI Ethics" as a domain of inquiry. Within this discourse, the book points to the capacity of sociotechnical systems that utilize data-driven algorithms to classify, to make decisions, and to control complex systems. Given the wide-reaching and often intimate impact these AI systems have on daily human lives, this volume attempts to address the increasingly complicated relations between humanity and artificial intelligence. It considers not only how humanity must conduct itself toward AI but also how AI must behave toward humanity.


2021 ◽  
pp. 002203452110138
Author(s):  
C.M. Mörch ◽  
S. Atsu ◽  
W. Cai ◽  
X. Li ◽  
S.A. Madathil ◽  
...  

Dentistry increasingly integrates artificial intelligence (AI) to help improve the current state of clinical dental practice. However, this revolutionary technological field raises various complex ethical challenges. The objective of this systematic scoping review is to document the current uses of AI in dentistry and the ethical concerns or challenges they imply. Three health care databases (MEDLINE [PubMed], SciVerse Scopus, and Cochrane Library) and 2 computer science databases (ArXiv, IEEE Xplore) were searched. After identifying 1,553 records, the documents were filtered, and a full-text screening was performed. In total, 178 studies were retained and analyzed by 8 researchers specialized in dentistry, AI, and ethics. The team used Covidence for data extraction and Dedoose for the identification of ethics-related information. PRISMA guidelines were followed. Among the included studies, 130 (73.0%) were published after 2016, and 93 (52.2%) were published in journals specialized in computer sciences. The technologies used were neural learning techniques for 75 (42.1%) studies, traditional learning techniques for 76 (42.7%), or a combination of several technologies for 20 (11.2%). Overall, 7 countries contributed 109 (61.2%) studies. A total of 53 different applications of AI in dentistry were identified, involving most dental specialties. The use of initial data sets for internal validation was reported in 152 (85.4%) studies. Forty-five ethical issues related to the use of AI in dentistry were reported in 22 (12.4%) studies, around 6 principles: prudence (10 times), equity (8), privacy (8), responsibility (6), democratic participation (4), and solidarity (4). The proportion of studies mentioning AI-related ethical issues has remained similar in recent years, indicating that interest in this topic within dentistry is not increasing. This study confirms the growing presence of AI in dentistry and highlights a current lack of information on the ethical challenges surrounding its use. In addition, the scarcity of studies sharing their code could prevent future replications. The authors formulate recommendations to contribute to a more responsible use of AI technologies in dentistry.
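As a loose illustration of the kind of tallying behind figures like these, the following Python sketch counts how often each ethical principle is coded across a set of included studies and the share of studies per year that mention any ethical issue. The records, field names, and values are hypothetical and are not the authors' Covidence/Dedoose extraction.

```python
from collections import Counter

# Hypothetical extraction records (illustrative only; the review used
# Covidence and Dedoose exports, not this format).
studies = [
    {"year": 2018, "principles": ["prudence", "privacy"]},
    {"year": 2019, "principles": []},
    {"year": 2020, "principles": ["equity", "responsibility", "prudence"]},
    {"year": 2020, "principles": []},
]

# How often each ethical principle is coded across the included studies.
principle_counts = Counter(p for s in studies for p in s["principles"])

# Share of studies per year that mention at least one ethical issue,
# to see whether that ratio changes over time.
per_year = {}
for s in studies:
    total, with_ethics = per_year.get(s["year"], (0, 0))
    per_year[s["year"]] = (total + 1, with_ethics + bool(s["principles"]))

for year, (total, with_ethics) in sorted(per_year.items()):
    print(year, f"{with_ethics}/{total} studies mention ethical issues")
print(principle_counts.most_common())
```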


Author(s):  
Jessica Morley ◽  
Anat Elhalal ◽  
Francesca Garcia ◽  
Libby Kinsey ◽  
Jakob Mökander ◽  
...  

As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provide insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines, and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the 'what' and the 'how' of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice. We concluded that this method of closure is currently ineffective, as almost all existing translational tools and methods are either too flexible (and thus vulnerable to ethics washing) or too strict (unresponsive to context). This raised the question: if, even with technical guidance, AI ethics is challenging to embed in the process of algorithmic design, is the entire pro-ethical design endeavour rendered futile? And, if not, then how can AI ethics be made useful for AI practitioners? This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited, and how these limitations can potentially be overcome by providing theoretical grounding for a concept that has been termed 'Ethics as a Service.'


2022 ◽  
Vol 6 ◽  
pp. 263
Author(s):  
Melati Nungsari ◽  
Chuah Hui Yin ◽  
Nicole Fong ◽  
Veena Pillai

Background: Globally, vulnerable populations have been disproportionately affected by the COVID-19 pandemic and subsequent responses, such as lockdown measures and mass vaccinations. Numerous ethical challenges have arisen at different levels, be it at the policy-making level or on the ground. For example, policymakers have to contain a highly contagious disease with high morbidity using scarce resources, while minimizing the medium- to long-term social and economic impacts induced by containment measures. This study explores the impact of COVID-19 on vulnerable populations in Malaysia by using an intersectional framework that accounts for overlapping forms of marginalization. Methods: This study utilizes in-depth qualitative data obtained from 34 individuals and organizations to understand the impact of the COVID-19 outbreak on vulnerable populations in Malaysia. We use four principles of ethics to guide our coding and interpretation of the data, namely beneficence, non-maleficence, justice, and autonomy. We use a frequency analysis to roughly characterize the types of ethical issues that emerged and then, using hermeneutic content analysis (HCA), explore how the principles interact with each other. Results: Through the frequency analysis, we found that although beneficence was very prevalent in our dataset, so was a significant amount of harm, perpetuated through injustice, the removal or lack of autonomy, and maleficence. We also unearthed a worrying landscape of harm and deep systemic issues associated with a lack of support for vulnerable households, further exacerbated during the pandemic. Conclusions: Policy recommendations for aid organizations and society to mitigate these ethical problems are presented, such as long-overdue institutional reforms and stronger ethical practices rooted in human rights principles, which government agencies and aid providers can then use in the provision of aid to vulnerable populations.
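As a minimal sketch of what such a frequency tally over principle-coded excerpts might look like, the snippet below counts how often each principle appears and how often pairs of principles co-occur in the same excerpt. The excerpts and tags are invented for illustration; this is not the study's actual coding or its hermeneutic procedure.

```python
from collections import Counter
from itertools import combinations

# Hypothetical principle-coded excerpts (illustrative only, not the study's data):
# each excerpt is tagged with the ethical principles it touches on.
coded_excerpts = [
    {"beneficence"},
    {"non-maleficence", "justice"},
    {"beneficence", "autonomy"},
    {"justice", "non-maleficence"},
]

# Frequency analysis: how often each principle appears across excerpts.
frequencies = Counter(p for excerpt in coded_excerpts for p in excerpt)

# Pairwise co-occurrence gives a rough starting point for exploring how the
# principles interact, ahead of the interpretive (hermeneutic) reading.
co_occurrence = Counter(
    pair for excerpt in coded_excerpts for pair in combinations(sorted(excerpt), 2)
)

print(frequencies.most_common())
print(co_occurrence.most_common())
```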


Author(s):  
Nandini Sen

This chapter aims to create new knowledge regarding artificial intelligence (AI) ethics and related subjects by reviewing the ethical relationship between human beings and AI/robotics and by linking the moral fabric, or the ethical issues, of AI as depicted in fiction and film. It carefully analyses how a human being might come to love a robot and vice versa. Here, fiction and film are not just about technology but about feelings and the nature of the bond between AIs and the human race. Ordinary human beings at first distrust AIs and then start to like them. However, if an AI goes rogue, as seen in much fiction and film, it is taken down to prevent the destruction of human beings. Scientists like Turing are champions of robot/AI feelings. Fictional and cinematic AIs are developed to keenly watch and comprehend humans, and these behaviours come so close to empathy that they suggest consciousness and emotional intelligence.


2019 ◽  
Vol 6 (1) ◽  
pp. 205395171986054 ◽  
Author(s):  
Heike Felzmann ◽  
Eduard Fosch Villaronga ◽  
Christoph Lutz ◽  
Aurelia Tamò-Larrieux

Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss the pitfalls with respect to this requirement by focusing on the significance of contextual and performative factors in the implementation of transparency. We show that the human-computer interaction and human-robot interaction literatures do not provide clear results with respect to the benefits of transparency for users of artificial intelligence technologies, owing to the impact of a wide range of contextual factors, including performative aspects. We conclude by integrating the information- and explanation-based approach to transparency with the critical contextual approach, proposing that transparency as required by the General Data Protection Regulation may in itself be insufficient to achieve the positive goals associated with transparency. Instead, we propose to understand transparency relationally, where information provision is conceptualized as communication between technology providers and users, and where assessments of trustworthiness based on contextual factors mediate the value of transparency communications. This relational concept of transparency points to future research directions for the study of transparency in artificial intelligence systems and should be taken into account in policymaking.


2020 ◽  
Vol 31 (2) ◽  
pp. 74-87 ◽  
Author(s):  
Keng Siau ◽  
Weiyu Wang

Artificial intelligence (AI)-based technology has achieved many great things, such as facial recognition, medical diagnosis, and self-driving cars. AI promises enormous benefits for economic growth and social development, as well as for human well-being and safety. However, the low level of explainability, data biases, data security, data privacy, and ethical problems of AI-based technology pose significant risks for users, developers, humanity, and societies. As AI advances, one critical issue is how to address the ethical and moral challenges associated with it. Even though the concept of "machine ethics" was proposed around 2006, AI ethics is still in its infancy. AI ethics is the field concerned with the study of ethical issues in AI. To address AI ethics, one needs to consider both the ethics of AI and how to build ethical AI. The ethics of AI studies the ethical principles, rules, guidelines, policies, and regulations that are related to AI. Ethical AI is an AI that performs and behaves ethically. One must recognize and understand the potential ethical and moral issues that may be caused by AI in order to formulate the necessary ethical principles, rules, guidelines, policies, and regulations for AI (i.e., the ethics of AI). With the appropriate ethics of AI in place, one can then build AI that exhibits ethical behavior (i.e., ethical AI). This paper discusses AI ethics by looking at the ethics of AI and ethical AI. What are the perceived ethical and moral issues with AI? What are the general and common ethical principles, rules, guidelines, policies, and regulations that can resolve or at least attenuate these ethical and moral issues? What are some of the necessary features and characteristics of an ethical AI? And how can one adhere to the ethics of AI in order to build ethical AI?


2013 ◽  
Vol 35 (3) ◽  
pp. 211-227 ◽  
Author(s):  
Michael Sude

The impact of technology on mental health practice is currently a concern in the counseling literature, and several articles have discussed using different types of technology in practice. In particular, many private practitioners use a cell phone for business. However, no article has discussed ethical concerns and best practices for the use of short message service (SMS), better known as text messaging (TM). Ethical issues that arise with TM relate to confidentiality, documentation, counselor competence, appropriateness of use, and misinterpretation. There are also boundary issues to consider, such as multiple relationships, counselor availability, and billing. This article addresses ethical concerns for mental health counselors who use TM in private practice. It reviews the literature and discusses benefits, ethical concerns, and guidelines for office policies and personal best practices.


2020 ◽  
Vol 10 (3-4) ◽  
pp. 203-220
Author(s):  
Yevhen Laniuk

The Society of Control is a philosophical concept developed by Gilles Deleuze in the early 1990s to highlight the transition from Michel Foucault's Disciplinary Society to a new social constitution of power assisted by digital technologies. The Society of Control is organized around switches, which convert data and, in this way, exercise power. These switches take data inputs (digitized information about individuals) and transform them into outputs (decisions) based on their pre-programmed instructions. I call these switches "automated decision-making algorithms" (ADMAs) and look at ethical issues that arise from their impact on human freedom. I distinguish between negative and positive aspects of freedom and examine the impact of the ADMAs on both. My main argument is that freedom becomes endangered in this new ecosystem of computerized control, which makes individuals powerless in new and unprecedented ways. Finally, I suggest a few ways to recover freedom while preserving the economic benefits of the ADMAs.
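To make the "switch" metaphor concrete, here is a toy Python sketch of an ADMA in the sense described above: a pre-programmed rule that converts digitized data about an individual into a decision. The fields, threshold, and scoring rule are invented for illustration and are not drawn from the article.

```python
from dataclasses import dataclass

@dataclass
class IndividualData:
    # Digitized information about an individual (fields invented for illustration).
    income: float
    missed_payments: int

def admas_decide(data: IndividualData) -> str:
    """A pre-programmed 'switch': data in, decision out, no human judgement involved."""
    score = data.income / 10_000 - 2 * data.missed_payments  # invented scoring rule
    return "approve" if score >= 3 else "deny"

print(admas_decide(IndividualData(income=55_000, missed_payments=1)))  # prints "approve"
```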



