Leveraging Artificial Intelligence in Marketing for Social Good—An Ethical Perspective

Author(s):  
Erik Hermann

Abstract: Artificial intelligence (AI) is (re)shaping strategy, activities, interactions, and relationships in business and specifically in marketing. The drawback of the substantial opportunities AI systems and applications (will) provide in marketing is ethical controversy. Building on the literature on AI ethics, the authors systematically scrutinize the ethical challenges of deploying AI in marketing from a multi-stakeholder perspective. By revealing interdependencies and tensions between ethical principles, the authors shed light on the applicability of a purely principled, deontological approach to AI ethics in marketing. To reconcile some of these tensions and account for the AI-for-social-good perspective, the authors suggest how AI in marketing can be leveraged to promote societal and environmental well-being.

2021 ◽  
pp. 146144482110227
Author(s):  
Erik Hermann

Artificial intelligence (AI) is (re)shaping communication and contributes to (commercial and informational) need satisfaction by means of mass personalization. However, the substantial personalization and targeting opportunities do not come without ethical challenges. Following an AI-for-social-good perspective, the authors systematically scrutinize the ethical challenges of deploying AI for mass personalization of communication content from a multi-stakeholder perspective. The conceptual analysis reveals interdependencies and tensions between ethical principles, which underscore the need for a basic understanding of AI inputs, functioning, agency, and outcomes. Through this form of AI literacy, individuals could be empowered to interact with and treat mass-personalized content in a way that promotes individual and social good while preventing harm.


2020 ◽  
Vol 31 (2) ◽  
pp. 74-87 ◽  
Author(s):  
Keng Siau ◽  
Weiyu Wang

Artificial intelligence (AI)-based technology has achieved many great things, such as facial recognition, medical diagnosis, and self-driving cars. AI promises enormous benefits for economic growth, social development, as well as human well-being and safety improvement. However, the low level of explainability, data biases, data security, data privacy, and ethical problems of AI-based technology pose significant risks for users, developers, humanity, and societies. As AI advances, one critical issue is how to address the ethical and moral challenges associated with AI. Even though the concept of "machine ethics" was proposed around 2006, AI ethics is still in its infancy. AI ethics is the field concerned with the study of ethical issues in AI. To address AI ethics, one needs to consider both the ethics of AI and how to build ethical AI. Ethics of AI studies the ethical principles, rules, guidelines, policies, and regulations that are related to AI. Ethical AI is an AI that performs and behaves ethically. One must recognize and understand the potential ethical and moral issues that may be caused by AI in order to formulate the necessary ethical principles, rules, guidelines, policies, and regulations for AI (i.e., ethics of AI). With the appropriate ethics of AI, one can then build AI that exhibits ethical behavior (i.e., ethical AI). This paper discusses AI ethics by looking at the ethics of AI and ethical AI. What are the perceived ethical and moral issues with AI? What are the general and common ethical principles, rules, guidelines, policies, and regulations that can resolve or at least attenuate these ethical and moral issues with AI? What are some of the necessary features and characteristics of an ethical AI? How can one adhere to the ethics of AI in order to build ethical AI?


2021 ◽  
Vol 29 ◽  
Author(s):  
Catharina Rudschies ◽  
Ingrid Schneider ◽  
Judith Simon

In the current debate on the ethics of Artificial Intelligence (AI), much attention has been paid to finding "common ground" among the numerous AI ethics guidelines. The divergences, however, are equally important, as they shed light on the conflicts and controversies that require further debate. This paper analyses the AI ethics landscape with a focus on divergences across actor types (public, expert, and private actors). It finds that differences in actors' priorities for ethical principles influence the overall outcome of the debate. It shows that determining "minimum requirements" or "primary principles" on the basis of frequency excludes many principles that are subject to controversy but might still be ethically relevant. The results are discussed in light of value pluralism, suggesting that the plurality of sets of principles must be acknowledged and can be used to further the debate.


Author(s):  
AJung Moon ◽  
Shalaleh Rismani ◽  
H. F. Machiel Van der Loos

Abstract: Purpose of Review: To summarize the set of roboethics issues that uniquely arise due to the corporeality and physical interaction modalities afforded by robots, irrespective of the degree of artificial intelligence present in the system. Recent Findings: One of the recent trends in the discussion of the ethics of emerging technologies has been the treatment of roboethics issues as those of "embodied AI," a subset of AI ethics. In contrast to AI, however, robots leverage humans' natural tendency to be influenced by their physical environment. Recent work in human-robot interaction highlights the impact a robot's presence, capacity to touch, and ability to move in our physical environment have on people, helping to articulate the ethical issues particular to the design of interactive robotic systems. Summary: The corporeality of interactive robots poses unique sets of ethical challenges. These issues should be considered in the design irrespective of, and in addition to, the ethics of the artificial intelligence implemented in them.


Author(s):  
Anri Leimanis

Advances in Artificial Intelligence (AI) applications in education have encouraged an extensive global discourse on the underlying ethical principles and values. In response, numerous research institutions, companies, public agencies, and non-governmental entities around the globe have published their own guidelines and/or policies for ethical AI. Even though the aim of most of the guidelines is to maximize the benefits that AI delivers to education, the policies differ significantly in content as well as application. In order to facilitate further discussion about ethical principles and the responsibilities of educational institutions using AI, and to potentially arrive at a consensus concerning safe and desirable uses of AI in education, this paper evaluates these self-imposed AI ethics guidelines, identifying the common principles and approaches as well as the drawbacks limiting the practical and legal application of the policies.


2021 ◽  
Vol 27 (1) ◽  
Author(s):  
Charlotte Stix

Abstract: In the development of governmental policy for artificial intelligence (AI) that is informed by ethics, one avenue currently pursued is that of drawing on "AI Ethics Principles". However, these AI Ethics Principles often fail to be actioned in governmental policy. This paper proposes a novel framework for the development of "Actionable Principles for AI". The approach acknowledges the relevance of AI Ethics Principles and homes in on methodological elements to increase their practical implementability in policy processes. As a case study, elements are extracted from the development process of the Ethics Guidelines for Trustworthy AI of the European Commission's High-Level Expert Group on AI. Subsequently, these elements are expanded on and evaluated in light of their ability to contribute to a prototype framework for the development of "Actionable Principles for AI". The paper proposes the following three propositions for the formation of such a prototype framework: (1) preliminary landscape assessments; (2) multi-stakeholder participation and cross-sectoral feedback; and (3) mechanisms to support implementation and operationalizability.


Author(s):  
John Basl ◽  
Joseph Bowen

This chapter evaluates whether AI systems are or will be rights-holders. It develops a skeptical stance toward the idea that current forms of artificial intelligence are holders of moral rights, beginning with an articulation of one of the most prominent and most plausible theories of moral rights: the Interest Theory of rights. On the Interest Theory, AI systems will be rights-holders only if they have interests or a well-being. Current AI systems are not bearers of well-being, and so fail to meet the necessary condition for being rights-holders. This argument is robust against a range of different objections. However, the chapter also shows why difficulties in assessing whether future AI systems might have interests or be bearers of well-being—and so be rights-holders—raise difficult ethical challenges for certain developments in AI.


AI and Ethics ◽  
2020 ◽  
Author(s):  
Emre Kazim ◽  
Adriano Koshiyama

Abstract: In the growing literature on artificial intelligence (AI) impact assessments, the literature on data protection impact assessments is heavily referenced. Given the relative maturity of the data protection debate and the fact that it has translated into legal codification, it is indeed a natural place to start for AI. In this article, we anticipate directions in what we believe will become a dominant and impactful forthcoming debate, namely, how to conceptualise the relationship between data protection and AI impact. We begin by discussing the value canvas, i.e., the ethical principles that underpin data and AI ethics, and how these are instantiated in value trade-offs when the ethics are applied. Following this, we map three kinds of relationships that can be envisioned between data and AI ethics, and then close with a discussion of asymmetry in value trade-offs where privacy and fairness are concerned.


2020 ◽  
Author(s):  
Zhaohui Su ◽  
Dean McDonnell ◽  
Barry L Bentley ◽  
Jiguang He ◽  
Feng Shi ◽  
...  

BACKGROUND With advances in science and technology, biotechnology is becoming more accessible to people of all demographics. These advances hold the promise of substantially improving personal and population well-being and welfare. It is paradoxical that while greater access to biotechnology on a population level has many advantages, it may also increase the likelihood and frequency of biodisasters due to accidental or malicious use. Analogous to "Disease X" (describing unknown, naturally emerging pathogenic diseases with pandemic potential), we term this unknown risk from biotechnologies "Biodisaster X." To date, no studies have examined the potential role of information technologies in preventing and mitigating Biodisaster X. OBJECTIVE This study aimed to explore (1) what Biodisaster X might entail and (2) solutions that use artificial intelligence (AI) and emerging 6G technologies to help monitor and manage Biodisaster X threats. METHODS A review of the literature on applying AI and 6G technologies for monitoring and managing biodisasters was conducted on PubMed, using articles published from database inception through November 16, 2020. RESULTS Our findings show that Biodisaster X has the potential to upend lives and livelihoods and destroy economies, essentially posing a looming risk for civilizations worldwide. To shed light on Biodisaster X threats, we detailed effective AI- and 6G-enabled strategies, from natural language processing to deep learning–based image analysis, addressing issues that span early Biodisaster X detection (eg, identification of suspicious behaviors), remote design and development of pharmaceuticals (eg, treatment development), public health interventions (eg, reactive shelter-at-home mandate enforcement), and disaster recovery (eg, sentiment analysis of social media posts to shed light on the public's feelings and readiness for recovery building).
CONCLUSIONS Biodisaster X is a looming but avoidable catastrophe. Considering the potential human and economic consequences Biodisaster X could cause, actions that can effectively monitor and manage Biodisaster X threats must be taken promptly and proactively. Rather than depending solely on the overstretched attention of health experts and government officials, it is perhaps more cost-effective and practical to deploy technology-based solutions to prevent and control Biodisaster X threats. This study discusses what Biodisaster X could entail and emphasizes the importance of monitoring and managing Biodisaster X threats with AI techniques and 6G technologies. Future studies could explore how the convergence of AI and 6G systems may further advance preparedness for high-impact, low-likelihood events beyond Biodisaster X.


Author(s):  
Athina Karatzogianni

Artificial Intelligence (AI) regulatory and other governance mechanisms have only started to emerge and consolidate. AI regulation, legislation, frameworks, and guidelines are therefore presently fragmented, isolated, or co-existing in an opaque space between national governments, international bodies, corporations, practitioners, think-tanks, and civil society organisations. This article proposes a research design set up to address this problem by directly collaborating with targeted actors to identify principles for AI that are trustworthy, accountable, safe, fair, and non-discriminatory, and which put human rights and the social good at the centre of their approach. It proposes 21 interlinked substudies, focusing on the ethical judgements, empirical statements, and practical guidelines that manufacture ethicopolitical visions and AI policies across four domains: seven tech corporations, seven governments, and seven civil society actors, together with the analysis of online public debates. The proposed research design uses multiple research techniques: extensive mapping and studies of AI ethics policy documents and 120 interviews with key individuals, as well as assorted analyses of public feedback discussion loops on AI, employing digital methods on online communities specialising in AI debates. It considers novel conceptual interactions communicated across the globe and expands regulatory, ethical, and technological foresight, both at the individual level (autonomy, identity, dignity, privacy, and data protection) and the societal level (fairness/equality, responsibility, accountability and transparency, surveillance/datafication, democracy and trust, collective humanity and the common good). By producing an innovative, intercontinental, multidisciplinary research design for an Ethical AI Standard, this article offers a concrete plan to search for the Holy Grail of Artificial Intelligence: its ethics.
