Artificial Intelligence: Managing the ethical challenges

OECD Podcasts ◽  
2021 ◽  
pp. 002203452110138
Author(s):  
C.M. Mörch ◽  
S. Atsu ◽  
W. Cai ◽  
X. Li ◽  
S.A. Madathil ◽  
...  

Dentistry increasingly integrates artificial intelligence (AI) to help improve the current state of clinical dental practice. However, this revolutionary technological field raises various complex ethical challenges. The objective of this systematic scoping review is to document the current uses of AI in dentistry and the ethical concerns or challenges they imply. Three health care databases (MEDLINE [PubMed], SciVerse Scopus, and Cochrane Library) and 2 computer science databases (ArXiv, IEEE Xplore) were searched. After identifying 1,553 records, the documents were filtered, and a full-text screening was performed. In total, 178 studies were retained and analyzed by 8 researchers specialized in dentistry, AI, and ethics. The team used Covidence for data extraction and Dedoose for the identification of ethics-related information. PRISMA guidelines were followed. Among the included studies, 130 (73.0%) were published after 2016, and 93 (52.2%) appeared in journals specialized in computer sciences. The technologies used were neural learning techniques in 75 (42.1%) studies, traditional learning techniques in 76 (42.7%), or a combination of several technologies in 20 (11.2%). Overall, 7 countries contributed 109 (61.2%) studies. A total of 53 different applications of AI in dentistry were identified, involving most dental specialties. The use of initial data sets for internal validation was reported in 152 (85.4%) studies. Forty-five ethical issues related to the use of AI in dentistry were reported in 22 (12.4%) studies, organized around 6 principles: prudence (10 times), equity (8), privacy (8), responsibility (6), democratic participation (4), and solidarity (4). The proportion of studies mentioning AI-related ethical issues has remained stable in recent years, indicating that interest in this topic within the field of dentistry is not increasing.
This study confirms the growing presence of AI in dentistry and highlights a current lack of information on the ethical challenges surrounding its use. In addition, the scarcity of studies sharing their code could hinder future replication. The authors formulate recommendations to contribute to a more responsible use of AI technologies in dentistry.


Author(s):  
AJung Moon ◽  
Shalaleh Rismani ◽  
H. F. Machiel Van der Loos

Abstract Purpose of Review To summarize the set of roboethics issues that uniquely arise due to the corporeality and physical interaction modalities afforded by robots, irrespective of the degree of artificial intelligence present in the system. Recent Findings One of the recent trends in the discussion of ethics of emerging technologies has been the treatment of roboethics issues as those of “embodied AI,” a subset of AI ethics. In contrast to AI, however, robots leverage humans’ natural tendency to be influenced by our physical environment. Recent work in human-robot interaction highlights the impact that a robot’s presence, capacity to touch, and movement in our physical environment have on people, and helps to articulate the ethical issues particular to the design of interactive robotic systems. Summary The corporeality of interactive robots poses unique sets of ethical challenges. These issues should be considered in the design irrespective of, and in addition to, the ethics of the artificial intelligence implemented in them.


2021 ◽  
pp. 146144482110227
Author(s):  
Erik Hermann

Artificial intelligence (AI) is (re)shaping communication and contributes to (commercial and informational) need satisfaction by means of mass personalization. However, the substantial personalization and targeting opportunities do not come without ethical challenges. Following an AI-for-social-good perspective, the authors systematically scrutinize the ethical challenges of deploying AI for mass personalization of communication content from a multi-stakeholder perspective. The conceptual analysis reveals interdependencies and tensions between ethical principles, which underscore the need for a basic understanding of AI inputs, functioning, agency, and outcomes. Through this form of AI literacy, individuals could be empowered to interact with and treat mass-personalized content in a way that promotes individual and social good while preventing harm.


2021 ◽  
Vol 66 (Special Issue) ◽  
pp. 133-133
Author(s):  
Regina Mueller ◽  
Sebastian Laacke ◽  
Georg Schomerus ◽  
Sabine Salloch ◽  
...  

"Artificial Intelligence (AI) systems are increasingly being developed and various applications are already used in medical practice. This development promises improvements in prediction, diagnostics and treatment decisions. As one example, in the field of psychiatry, AI systems can already successfully detect markers of mental disorders such as depression. By using data from social media (e.g. Instagram or Twitter), users who are at risk of mental disorders can be identified. This potential of AI-based depression detectors (AIDD) opens chances, such as quick and inexpensive diagnoses, but also leads to ethical challenges especially regarding users’ autonomy. The focus of the presentation is on autonomy-related ethical implications of AI systems using social media data to identify users with a high risk of suffering from depression. First, technical examples and potential usage scenarios of AIDD are introduced. Second, it is demonstrated that the traditional concept of patient autonomy according to Beauchamp and Childress does not fully account for the ethical implications associated with AIDD. Third, an extended concept of “Health-Related Digital Autonomy” (HRDA) is presented. Conceptual aspects and normative criteria of HRDA are discussed. As a result, HRDA covers the elusive area between social media users and patients. "


2020 ◽  
Vol 13 ◽  
pp. 175628642093896
Author(s):  
Vida Abedi ◽  
Ayesha Khan ◽  
Durgesh Chaudhary ◽  
Debdipto Misra ◽  
Venkatesh Avula ◽  
...  

Stroke is the fifth leading cause of death in the United States and a major cause of severe disability worldwide. Yet recognizing the signs of stroke in an acute setting is still challenging and leads to lost opportunities to intervene, given the narrow therapeutic window. A decision support system using artificial intelligence (AI) and clinical data from electronic health records, combined with patients’ presenting symptoms, can be designed to support emergency department providers in stroke diagnosis and subsequently reduce treatment delays. In this article, we present a practical framework for developing such a decision support system using AI by reflecting on its various stages, which could eventually improve patient care and outcomes. We also discuss the technical, operational, and ethical challenges of the process.


2019 ◽  
Vol 32 (5) ◽  
pp. 272-275 ◽  
Author(s):  
Eric Racine ◽  
Wren Boehlen ◽  
Matthew Sample

Forms of Artificial Intelligence (AI), like deep learning algorithms and neural networks, are being intensely explored for novel healthcare applications in areas such as imaging and diagnosis, risk analysis, lifestyle management and monitoring, health information management, and virtual health assistance. Expected benefits in these areas are wide-ranging and include increased speed in imaging, greater insight into predictive screening, and decreased healthcare costs and inefficiency. However, AI-based clinical tools also create a host of situations wherein commonly held values and ethical principles may be challenged. In this short column, we highlight three potentially problematic aspects of AI use in healthcare: (1) dynamic information and consent, (2) transparency and ownership, and (3) privacy and discrimination. We discuss their impact on patient/client, clinician, and health institution values and suggest ways to tackle this impact. We propose that AI-related ethical challenges may represent an opportunity for growth in organizations.


2019 ◽  
Vol 2019 ◽  
Author(s):  
Paul Henman

Globally there is strong enthusiasm for using Artificial Intelligence (AI) in government decision making, yet this technocratic approach is not without significant downsides, including bias, the exacerbation of discrimination and inequality, and reduced government accountability and transparency. A flurry of analytical and policy work has recently sought to identify principles, policies, regulations, and institutions for enacting ethical AI. Yet what is lacking is a practical framework and means by which AI can be assessed as un/ethical. This paper provides an overview of an applied analytical framework for assessing the ethics of AI. It notes that AI (or algorithmic) decision-making is an outcome of data, code, context, and use. Using these four categories, the paper articulates the key questions necessary to determine the potential ethical challenges of using an AI/algorithm in decision making, and provides the basis for their articulation within a practical toolkit that can be demonstrated against known AI decision-making tools.


Author(s):  
Oloruntoba Samson Abiodun ◽  
Akinode John Lekan

In recent years, there has been massive progress in Artificial Intelligence (AI) with the development of machine learning, deep neural networks, natural language processing, computer vision, and robotics. These techniques are now actively being applied in the judiciary, with many of the legal service activities currently delivered by lawyers predicted to be taken over by AI in the coming years. This paper explores the potential and efficiency of artificial intelligence (AI) in justice delivery. The paper has two objectives: first, to highlight the main applications of AI in justice administration through some examples of recently developed AI tools; second, to assess the ethical challenges of AI in the judiciary. AI algorithms are starting to support lawyers, for instance through AI-powered search tools, or to support justice administrations with predictive technologies and business analytics based on the computation of Big Data. Using AI, legal knowledge-based tools may accelerate the service delivery of legal professionals, from the typical searching of related case journals to the extraction of precise information in a customized manner.


Author(s):  
Lu Cheng ◽  
Ahmadreza Mosallanezhad ◽  
Paras Sheth ◽  
Huan Liu

There have been increasing concerns about Artificial Intelligence (AI) due to its unfathomable potential power. To make AI address ethical challenges and avoid undesirable outcomes, researchers have proposed developing socially responsible AI (SRAI). One of these approaches is causal learning (CL). We survey state-of-the-art methods of CL for SRAI. We begin by examining seven CL tools for enhancing the social responsibility of AI, then review how existing works have succeeded in using these tools to tackle issues in developing SRAI, such as fairness. The goal of this survey is to bring to the forefront the potential and promise of CL for SRAI.
