Towards a framework for understanding societal and ethical implications of Artificial Intelligence

2020
pp. 89-100
Author(s):
Richard Benjamins
Idoia Salazar García


Author(s):
Andrea Renda

This chapter assesses Europe’s efforts in developing a full-fledged strategy on the human and ethical implications of artificial intelligence (AI). The strong focus on ethics in the European Union’s AI strategy should be seen in the context of an overall strategy that aims at protecting citizens and civil society from abuses of digital technology but also as part of a competitiveness-oriented strategy aimed at raising the standards for access to Europe’s wealthy Single Market. In this context, one of the most peculiar steps in the European Union’s strategy was the creation of an independent High-Level Expert Group on AI (AI HLEG), accompanied by the launch of an AI Alliance, which quickly attracted several hundred participants. The AI HLEG, a multistakeholder group including fifty-two experts, was tasked with the definition of Ethics Guidelines as well as with the formulation of “Policy and Investment Recommendations.” With the advice of the AI HLEG, the European Commission put forward ethical guidelines for Trustworthy AI—which are now paving the way for a comprehensive, risk-based policy framework.


2021
Vol 22 (1)
Author(s):
Kathleen Murphy
Erica Di Ruggiero
Ross Upshur
Donald J. Willison
Neha Malhotra
...  

Background: Artificial intelligence (AI) has been described as the “fourth industrial revolution” with transformative and global implications, including in healthcare, public health, and global health. AI approaches hold promise for improving health systems worldwide, as well as individual and population health outcomes. While AI may have potential for advancing health equity within and between countries, we must consider the ethical implications of its deployment in order to mitigate its potential harms, particularly for the most vulnerable. This scoping review addresses the following question: What ethical issues have been identified in relation to AI in the field of health, including from a global health perspective?

Methods: Eight electronic databases were searched for peer reviewed and grey literature published before April 2018 using the concepts of health, ethics, and AI, and their related terms. Records were independently screened by two reviewers and were included if they reported on AI in relation to health and ethics and were written in the English language. Data was charted on a piloted data charting form, and a descriptive and thematic analysis was performed.

Results: Upon reviewing 12,722 articles, 103 met the predetermined inclusion criteria. The literature was primarily focused on the ethics of AI in health care, particularly on carer robots, diagnostics, and precision medicine, but was largely silent on ethics of AI in public and population health. The literature highlighted a number of common ethical concerns related to privacy, trust, accountability and responsibility, and bias. Largely missing from the literature was the ethics of AI in global health, particularly in the context of low- and middle-income countries (LMICs).

Conclusions: The ethical issues surrounding AI in the field of health are both vast and complex. While AI holds the potential to improve health and health systems, our analysis suggests that its introduction should be approached with cautious optimism. The dearth of literature on the ethics of AI within LMICs, as well as in public health, also points to a critical need for further research into the ethical implications of AI within both global and public health, to ensure that its development and implementation is ethical for everyone, everywhere.


2021
Vol 66 (Special Issue)
pp. 133-133
Author(s):
Regina Mueller
Sebastian Laacke
Georg Schomerus
Sabine Salloch
...  

"Artificial Intelligence (AI) systems are increasingly being developed and various applications are already used in medical practice. This development promises improvements in prediction, diagnostics and treatment decisions. As one example, in the field of psychiatry, AI systems can already successfully detect markers of mental disorders such as depression. By using data from social media (e.g. Instagram or Twitter), users who are at risk of mental disorders can be identified. This potential of AI-based depression detectors (AIDD) opens chances, such as quick and inexpensive diagnoses, but also leads to ethical challenges especially regarding users’ autonomy. The focus of the presentation is on autonomy-related ethical implications of AI systems using social media data to identify users with a high risk of suffering from depression. First, technical examples and potential usage scenarios of AIDD are introduced. Second, it is demonstrated that the traditional concept of patient autonomy according to Beauchamp and Childress does not fully account for the ethical implications associated with AIDD. Third, an extended concept of “Health-Related Digital Autonomy” (HRDA) is presented. Conceptual aspects and normative criteria of HRDA are discussed. As a result, HRDA covers the elusive area between social media users and patients. "


2020
Vol 6 (2)
pp. 135-161
Author(s):
Diego Alejandro Borbón Rodríguez
Luisa Fernanda Borbón Rodríguez
Jeniffer Laverde Pinzón

Advances in neurotechnologies and artificial intelligence have led to an innovative proposal for setting ethical and legal limits on the development of these technologies: Human NeuroRights. The article addresses, first, recent advances in neurotechnologies and artificial intelligence, as well as their ethical implications. Second, it reviews the state of the art of the Human NeuroRights proposal, specifically that of the NeuroRights Initiative at Columbia University. Third, it critically analyzes the proposed rights to free will and to equitable access to augmentation technologies, concluding that, although new regulation of neurotechnologies and artificial intelligence is needed, the debate is still too immature to justify incorporating a new category of human rights, which may prove inconvenient or unnecessary. Finally, some considerations on how to regulate new technologies are set out and the conclusions of the work are presented.


Author(s):  
Christina L. McDowell Marinchak
Edward Forrest
Bogdan Hoanca

This entry reviews the state of the art in AI, with a particular focus on applications in marketing. Based on the current capabilities of AI in marketing, the authors explore the new rules of engagement. Rather than simply targeting consumers, the marketing effort will also be directed at the algorithms controlling consumers' virtual personal assistants (VPAs). Rather than exploiting human desires and weaknesses, marketing will need to focus on meeting the user's actual needs. The level of customer satisfaction will become even more critical, as marketing will need to focus on establishing and maintaining a reputation in competition with those of similar offerings in the marketplace. The entry concludes with thoughts on the long-term implications, exploring the role of customer trust in the adoption of AI agents, the security requirements for such agents, and the ethical implications of access to them.


2019
Vol 33 (02)
pp. 141-158
Author(s):
Steven Livingston
Mathias Risse

What are the implications of artificial intelligence (AI) for human rights over the next three decades? Precise answers to this question are made difficult by the rapid rate of innovation in AI research and by the effects of human practices on the adoption of new technologies. Precise answers are also challenged by imprecise usage of the term “AI,” which covers several distinct types of research. We begin by clarifying what we mean by AI. Most of our attention then focuses on the implications of artificial general intelligence (AGI), which would entail an algorithm or group of algorithms achieving something like superintelligence. While acknowledging that the feasibility of superintelligence is contested, we consider the moral and ethical implications of such a potential development. What do machines owe humans, and what do humans owe superintelligent machines?


2020
Author(s):
Kathleen Murphy
Erica Di Ruggiero
Ross Upshur
Donald J. Willison
Neha Malhotra
...  

Background: Artificial intelligence (AI) has been described as the “fourth industrial revolution” with transformative and global implications, including in healthcare, public health, and global health. AI approaches hold promise for improving health systems worldwide, as well as individual and population health outcomes. While AI may have the potential to advance health equity within and between countries, we must consider the ethical implications of its deployment in order to mitigate its potential harms, particularly for the most vulnerable. This scoping review addresses the following question: What ethical issues have been identified in relation to AI in the field of health, including from a global health perspective?

Methods: Eight electronic databases were searched for peer reviewed and grey literature using the overarching concepts of health, ethics, and AI, and their related terms. Records were independently screened by two reviewers and were included if they reported on AI in relation to health and ethics and were written in the English language. Data was charted on a piloted data abstraction form, and a descriptive and thematic analysis was performed.

Results: Upon reviewing 12,722 articles, 103 met the predetermined inclusion criteria. The literature was primarily focused on the ethics of AI in health care, particularly on carer robots, diagnostics, and precision medicine, but was largely silent on ethics of AI in public and population health. The literature highlighted a number of common ethical concerns related to privacy, trust, accountability, and bias. Largely missing from the reviewed literature was the ethics of AI in global health, particularly in the context of low- and middle-income countries (LMICs).

Conclusions: The ethical issues surrounding AI in the field of health are both vast and complex. While AI holds the potential to improve health and health systems, our analysis suggests that its introduction should be approached with cautious optimism. The dearth of literature on the ethics of AI within LMICs, as well as in public health, also points to a critical need for further research into the ethical implications of AI within both global and public health, to ensure that its development and implementation is ethical for everyone, everywhere.


Entropy
2021
Vol 23 (8)
pp. 1047
Author(s):
Danilo Franco
Luca Oneto
Nicolò Navarin
Davide Anguita

In many decision-making scenarios, ranging from recreational activities to healthcare and policing, the use of artificial intelligence coupled with the ability to learn from historical data is becoming ubiquitous. This widespread adoption of automated systems is accompanied by increasing concern about their ethical implications. Fundamental rights, such as the rights to privacy, to non-discrimination on the basis of sensitive attributes (e.g., gender, ethnicity, political or sexual orientation), and to an explanation for a decision, are undermined daily by the use of increasingly complex, less understandable, yet more accurate learning algorithms. To address this, in this work we move toward the development of systems able to ensure trustworthiness by delivering privacy, fairness, and explainability by design. In particular, we show that it is possible to learn from data while preserving the privacy of individuals thanks to Homomorphic Encryption, to ensure fairness by learning a fair representation of the data, and to provide explainable decisions with local and global explanations, all without compromising the accuracy of the final models. We test our approach on a widespread but still controversial application, namely face recognition, using the recent FairFace dataset to prove the validity of our approach.
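As one concrete, hedged illustration of the fairness dimension discussed above, the sketch below audits a plain classifier for demographic parity on synthetic data. The synthetic features, the binary sensitive attribute, and the scikit-learn model are assumptions made for this example; the snippet does not reproduce the authors' approach, which additionally relies on Homomorphic Encryption, fair representation learning, and local/global explanations.

```python
# Illustrative fairness audit on synthetic data (not the authors' pipeline,
# which combines Homomorphic Encryption, fair representations and explanations).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
group = rng.integers(0, 2, size=n)                  # hypothetical sensitive attribute
X = rng.normal(size=(n, 5)) + 0.5 * group[:, None]  # features correlated with the group
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0.25).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Demographic parity difference: gap in positive-prediction rates between groups.
rate_0 = pred[g_te == 0].mean()
rate_1 = pred[g_te == 1].mean()
print(f"positive rate, group 0: {rate_0:.3f}")
print(f"positive rate, group 1: {rate_1:.3f}")
print(f"demographic parity difference: {abs(rate_0 - rate_1):.3f}")
```

A by-design approach of the kind the abstract describes would aim to drive this parity gap toward zero without giving up the reported accuracy.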


AI and Ethics
2021
Author(s):
Kamolov Sergei
Kriebitz Alexander
Eliseeva Polina
Aleksandrov Nikita

The discourse on the ethics of artificial intelligence (AI) has generated a plethora of conventions, principles and guidelines outlining an ethical perspective on the use and research of AI. However, when it comes to breaking down general implications into specific use cases, existing frameworks have remained vague. This paper aims to fill this gap by examining the ethical implications of using information analytical systems, through a management approach, to filter content on social media and to prevent information thrusts with negative consequences for human beings and public administration. The ethical dimensions of AI technologies are revealed by moving deductively from the general challenges of digital governance to applied-level management techniques.

