The interrelation between data and AI ethics in the context of impact assessments

AI and Ethics, 2020
Author(s): Emre Kazim, Adriano Koshiyama

Abstract: In the growing literature on artificial intelligence (AI) impact assessments, the literature on data protection impact assessments is heavily referenced. Given the relative maturity of the data protection debate and the fact that it has translated into legal codification, it is indeed a natural place to start for AI. In this article, we anticipate directions in what we believe will become a dominant and impactful forthcoming debate, namely, how to conceptualise the relationship between data protection and AI impact. We begin by discussing the value canvas, i.e. the ethical principles that underpin data and AI ethics, and how these principles are instantiated in value trade-offs when they are applied. Following this, we map three kinds of relationships that can be envisioned between data and AI ethics, and close with a discussion of the asymmetry in value trade-offs where privacy and fairness are concerned.

2020, Vol. 31 (2), pp. 74-87
Author(s): Keng Siau, Weiyu Wang

Artificial intelligence (AI)-based technology has achieved many great things, such as facial recognition, medical diagnosis, and self-driving cars. AI promises enormous benefits for economic growth, social development, and improvements to human well-being and safety. However, the low level of explainability, data biases, data security, data privacy, and ethical problems of AI-based technology pose significant risks for users, developers, humanity, and societies. As AI advances, one critical issue is how to address the ethical and moral challenges associated with it. Even though the concept of “machine ethics” was proposed around 2006, AI ethics is still in its infancy. AI ethics is the field concerned with the study of ethical issues in AI. To address AI ethics, one needs to consider both the ethics of AI and how to build ethical AI. The ethics of AI studies the ethical principles, rules, guidelines, policies, and regulations that are related to AI. Ethical AI is AI that performs and behaves ethically. One must recognize and understand the potential ethical and moral issues that may be caused by AI in order to formulate the necessary ethical principles, rules, guidelines, policies, and regulations for AI (i.e., the ethics of AI). With the appropriate ethics of AI, one can then build AI that exhibits ethical behavior (i.e., ethical AI). This paper discusses AI ethics by looking at the ethics of AI and ethical AI. What are the perceived ethical and moral issues with AI? What are the general and common ethical principles, rules, guidelines, policies, and regulations that can resolve, or at least attenuate, these ethical and moral issues? What are some of the necessary features and characteristics of an ethical AI? And how can one adhere to the ethics of AI in order to build ethical AI?


Author(s): Anri Leimanis

Advances in Artificial Intelligence (AI) applications to education have encouraged an extensive global discourse on the underlying ethical principles and values. In response, numerous research institutions, companies, public agencies, and non-governmental entities around the globe have published their own guidelines and/or policies for ethical AI. Even though the aim of most of the guidelines is to maximize the benefits that AI delivers to education, the policies differ significantly in content as well as in application. To facilitate further discussion about the ethical principles and the responsibilities of educational institutions using AI, and to potentially arrive at a consensus concerning safe and desirable uses of AI in education, this paper evaluates these self-imposed AI ethics guidelines, identifying the common principles and approaches as well as the drawbacks that limit their practical and legal application.


2021, Vol. 29
Author(s): Catharina Rudschies, Ingrid Schneider, Judith Simon

In the current debate on the ethics of Artificial Intelligence (AI), much attention has been paid to finding some “common ground” among the numerous AI ethics guidelines. The divergences, however, are equally important, as they shed light on the conflicts and controversies that require further debate. This paper analyses the AI ethics landscape with a focus on divergences across actor types (public, expert, and private actors). It finds that differences in actors’ priorities for ethical principles influence the overall outcome of the debate. It shows that determining “minimum requirements” or “primary principles” on the basis of frequency excludes many principles that are subject to controversy but might still be ethically relevant. The results are discussed in the light of value pluralism, suggesting that the plurality of sets of principles must be acknowledged and can be used to further the debate.


2020, Vol. 30 (1), pp. 99-120
Author(s): Thilo Hagendorff

Abstract: Current advances in research, development, and application of artificial intelligence (AI) systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed at harnessing the “disruptive” potential of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, I also examine to what extent the respective ethical principles and values are implemented in the practice of research, development, and application of AI systems, and how the effectiveness of the demands of AI ethics can be improved.


Author(s): Erik Hermann

Abstract: Artificial intelligence (AI) is (re)shaping strategy, activities, interactions, and relationships in business, and specifically in marketing. The drawback of the substantial opportunities that AI systems and applications (will) provide in marketing is the ethical controversy they raise. Building on the literature on AI ethics, the authors systematically scrutinize the ethical challenges of deploying AI in marketing from a multi-stakeholder perspective. By revealing interdependencies and tensions between ethical principles, the authors shed light on the applicability of a purely principled, deontological approach to AI ethics in marketing. To reconcile some of these tensions and account for the AI-for-social-good perspective, the authors suggest how AI in marketing can be leveraged to promote societal and environmental well-being.


AI & Society, 2021
Author(s): Joris Krijger

Abstract: As artificial intelligence (AI) deployment is growing exponentially, questions have been raised as to whether the AI ethics discourse that has developed is apt to address the currently pressing questions in the field. Building on critical theory, this article aims to expand the scope of AI ethics by arguing that, in addition to ethical principles and design, the organizational dimension (i.e. the background assumptions and values influencing design processes) plays a pivotal role in the operationalization of ethics in AI development and deployment contexts. Through the prism of critical theory, and in particular the notions of underdetermination and technical code as developed by Feenberg, the organizational dimension is related to two general challenges in operationalizing ethical principles in AI: (a) the challenge of ethical principles placing conflicting demands on an AI design that cannot be satisfied simultaneously, for which the term ‘inter-principle tension’ is coined, and (b) the challenge of translating an ethical principle into a technological form, constraint, or demand, for which the term ‘intra-principle tension’ is coined. Rather than discussing principles, methods, or metrics, the notion of technical code precipitates a discussion of the subsequent questions of value decisions, governance, and procedural checks and balances. It is held that including and interrogating the organizational context in AI ethics approaches allows for a more in-depth understanding of the current challenges concerning the formalization and implementation of ethical principles, as well as of the ways in which these challenges could be met.


2020, Vol. 17 (6), pp. 76-91
Author(s): E. D. Solozhentsev

The scientific problem of economics, “managing the quality of human life”, is formulated on the basis of artificial intelligence, the algebra of logic, and logical-probabilistic calculus. Managing the quality of human life is represented as managing the processes of a person’s treatment, training, and decision making. Events in these processes, and the corresponding logical variables, relate to the behavior of the person, other persons, and infrastructure. The quality-of-life processes are modeled, analyzed, and managed with the participation of the person concerned. Scenarios and structural, logical, and probabilistic models of managing the quality of human life are given, and special software for quality management is described. The relationship between human quality of life and the digital economy is examined. We also consider the role of public opinion in management “from the bottom”, based on a synthesis of many studies on the management of the economy and the state; management from the bottom also provides feedback to management from the top.
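To make the logical-probabilistic calculus mentioned in this abstract concrete, the following minimal sketch (not taken from the paper; the event names, probabilities, and independence assumption are hypothetical) illustrates how a logical model of a risk event can be converted into a probability estimate.

```python
# Illustrative logical-probabilistic model (hypothetical example).
# A risk event Y is defined by a logical function of binary events:
#   Y = X1 OR (X2 AND X3)
# Assuming the events are independent, P(Y) follows from the
# inclusion-exclusion rule: P(Y) = P1 + P2*P3 - P1*P2*P3.

def prob_or(pa: float, pb: float) -> float:
    """P(A or B) for independent events A and B."""
    return pa + pb - pa * pb

def prob_and(pa: float, pb: float) -> float:
    """P(A and B) for independent events A and B."""
    return pa * pb

# Hypothetical event probabilities (e.g., behavior of the person,
# other persons, and infrastructure).
p_x1, p_x2, p_x3 = 0.05, 0.10, 0.20

p_y = prob_or(p_x1, prob_and(p_x2, p_x3))
print(f"P(Y) = {p_y:.4f}")  # 0.05 + 0.02 - 0.001 = 0.0690
```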


This book is the first to examine the history of imaginative thinking about intelligent machines. As real artificial intelligence (AI) begins to touch on all aspects of our lives, this long narrative history shapes how the technology is developed, deployed, and regulated. It is therefore a crucial social and ethical issue. Part I of this book provides a historical overview from ancient Greece to the start of modernity. These chapters explore the revealing prehistory of key concerns of contemporary AI discourse, from the nature of mind and creativity to issues of power and rights, from the tension between fascination and ambivalence to investigations into artificial voices and technophobia. Part II focuses on the twentieth and twenty-first centuries in which a greater density of narratives emerged alongside rapid developments in AI technology. These chapters reveal not only how AI narratives have consistently been entangled with the emergence of real robotics and AI, but also how they offer a rich source of insight into how we might live with these revolutionary machines. Through their close textual engagements, these chapters explore the relationship between imaginative narratives and contemporary debates about AI’s social, ethical, and philosophical consequences, including questions of dehumanization, automation, anthropomorphization, cybernetics, cyberpunk, immortality, slavery, and governance. The contributions, from leading humanities and social science scholars, show that narratives about AI offer a crucial epistemic site for exploring contemporary debates about these powerful new technologies.

