AI as a Moral Right-Holder

Author(s):  
John Basl ◽  
Joseph Bowen

This chapter evaluates whether AI systems are or will be rights-holders. It develops a skeptical stance toward the idea that current forms of artificial intelligence are holders of moral rights, beginning with an articulation of one of the most prominent and most plausible theories of moral rights: the Interest Theory of rights. On the Interest Theory, AI systems will be rights-holders only if they have interests or a well-being. Current AI systems are not bearers of well-being, and so fail to meet the necessary condition for being rights-holders. This argument is robust against a range of different objections. However, the chapter also shows why difficulties in assessing whether future AI systems might have interests or be bearers of well-being—and so be rights-holders—raise difficult ethical challenges for certain developments in AI.

Author(s):  
Erik Hermann

Abstract Artificial intelligence (AI) is (re)shaping strategy, activities, interactions, and relationships in business and specifically in marketing. The drawback of the substantial opportunities AI systems and applications (will) provide in marketing is ethical controversy. Building on the literature on AI ethics, the authors systematically scrutinize the ethical challenges of deploying AI in marketing from a multi-stakeholder perspective. By revealing interdependencies and tensions between ethical principles, the authors shed light on the applicability of a purely principled, deontological approach to AI ethics in marketing. To reconcile some of these tensions and account for the AI-for-social-good perspective, the authors suggest how AI in marketing can be leveraged to promote societal and environmental well-being.


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
J. Raymond Geis ◽  
Adrian Brady ◽  
Carol C. Wu ◽  
Jack Spencer ◽  
Erik Ranschaert ◽  
...  

Abstract This is a condensed summary of an international multisociety statement on the ethics of artificial intelligence (AI) in radiology produced by the ACR, European Society of Radiology, RSNA, Society for Imaging Informatics in Medicine, European Society of Medical Imaging Informatics, Canadian Association of Radiologists, and American Association of Physicists in Medicine. AI has great potential to increase efficiency and accuracy throughout radiology, but it also carries inherent pitfalls and biases. Widespread use of AI-based intelligent and autonomous systems in radiology can increase the risk of systemic errors with high consequence, and it highlights complex ethical and societal issues. Currently, there is little experience using AI for patient care in diverse clinical settings. Extensive research is needed to understand how best to deploy AI in clinical practice. This statement highlights our consensus that ethical use of AI in radiology should promote well-being, minimize harm, and ensure that the benefits and harms are distributed among stakeholders in a just manner. We believe AI should respect human rights and freedoms, including dignity and privacy. It should be designed for maximum transparency and dependability. Ultimate responsibility and accountability for AI remains with its human designers and operators for the foreseeable future. The radiology community should start now to develop codes of ethics and practice for AI that promote any use that helps patients and the common good, and that block the use of radiology data and algorithms for financial gain without those two attributes.


2021 ◽  
pp. 002203452110138
Author(s):  
C.M. Mörch ◽  
S. Atsu ◽  
W. Cai ◽  
X. Li ◽  
S.A. Madathil ◽  
...  

Dentistry increasingly integrates artificial intelligence (AI) to help improve the current state of clinical dental practice. However, this revolutionary technological field raises various complex ethical challenges. The objective of this systematic scoping review is to document the current uses of AI in dentistry and the ethical concerns or challenges they imply. Three health care databases (MEDLINE [PubMed], SciVerse Scopus, and Cochrane Library) and 2 computer science databases (ArXiv, IEEE Xplore) were searched. After identifying 1,553 records, the documents were filtered, and a full-text screening was performed. In total, 178 studies were retained and analyzed by 8 researchers specialized in dentistry, AI, and ethics. The team used Covidence for data extraction and Dedoose for the identification of ethics-related information. PRISMA guidelines were followed. Among the included studies, 130 (73.0%) were published after 2016, and 93 (52.2%) were published in journals specialized in computer sciences. The technologies used were neural learning techniques in 75 studies (42.1%), traditional learning techniques in 76 (42.7%), or a combination of several technologies in 20 (11.2%). Overall, 7 countries contributed 109 (61.2%) of the studies. A total of 53 different applications of AI in dentistry were identified, involving most dental specialties. The use of initial data sets for internal validation was reported in 152 (85.4%) studies. Forty-five ethical issues related to the use of AI in dentistry were reported in 22 (12.4%) studies, organized around 6 principles: prudence (10 times), equity (8), privacy (8), responsibility (6), democratic participation (4), and solidarity (4). The proportion of studies mentioning AI-related ethical issues has remained similar over recent years, indicating no growing interest in this topic within dentistry. This study confirms the growing presence of AI in dentistry and highlights a current lack of information on the ethical challenges surrounding its use. In addition, the scarcity of studies sharing their code could hinder future replication. The authors formulate recommendations to contribute to a more responsible use of AI technologies in dentistry.
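For readers cross-checking the figures above, each percentage is a share of the 178 retained studies. The following is a minimal Python sketch that reproduces the reported values; only the counts come from the abstract, and the category labels are paraphrased here for brevity:

```python
# Recomputing the proportions reported in the scoping review from the raw
# counts given in the abstract; every denominator is the 178 retained studies.
RETAINED = 178
counts = {
    "published after 2016": 130,
    "published in computer-science journals": 93,
    "neural learning techniques": 75,
    "traditional learning techniques": 76,
    "combination of several technologies": 20,
    "contributed by the 7 main countries": 109,
    "internal validation on initial data sets": 152,
    "reporting ethical issues": 22,
}
for label, n in counts.items():
    print(f"{label}: {n}/{RETAINED} = {100 * n / RETAINED:.1f}%")
```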


Author(s):  
Yoko E. Fukumura ◽  
Julie McLaughlin Gray ◽  
Gale M. Lucas ◽  
Burcin Becerik-Gerber ◽  
Shawn C. Roll

Workplace environments have a significant impact on worker performance, health, and well-being. With machine learning capabilities, artificial intelligence (AI) can be developed to automate individualized adjustments to work environments (e.g., lighting, temperature) and to facilitate healthier worker behaviors (e.g., posture). Worker perspectives on incorporating AI into office workspaces are largely unexplored. Thus, the purpose of this study was to explore office workers’ views on including AI in their workspace. Six focus group interviews with a total of 45 participants were conducted. Interview questions were designed to generate discussion on benefits, challenges, and pragmatic considerations for incorporating AI into office settings. Sessions were audio-recorded, transcribed, and analyzed using an iterative approach. Two primary constructs emerged. First, participants shared preferences and concerns regarding communication and interactions with the technology. Second, numerous conversations highlighted the dualistic nature of a system that collects large amounts of data: the potential benefits of behavior change to improve health and the pitfalls of trust and privacy. Across both constructs, there was an overarching discussion of how AI intersects with the complexity of work performance. Numerous thoughts were shared about future AI solutions that could enhance the office workplace. This study’s findings indicate that the acceptability of AI in the workplace is complex and depends upon the benefits outweighing the potential detriments. Office worker needs are complex and diverse, and AI systems should aim to accommodate individual needs.


2020 ◽  
Vol 4 (Supplement_1) ◽  
pp. 757-757
Author(s):  
Marie Boltz ◽  
Karin Wolf-Ostermann ◽  
Katie Maslow

Abstract Dementia poses a societal challenge that is life-changing not only for persons with dementia (PWD) but also for the family members and friends (informal carers) directly involved in the care arrangement. Informal carers (IC) typically have poorer outcomes in terms of well-being, quality of life (QoL), health status, and use of health care resources. Dyads of PWD and IC are characterized by strong reciprocal relationships and complex living contexts. Therefore, research should investigate home-based dementia caregiving from a dyadic perspective to yield interventions that support the PWD, the IC, and the unit as a whole. However, it is an ongoing challenge to investigate dyadic needs and preferences in daily practice and to develop effective interventions. Challenges relate to an incomplete understanding of dyadic characteristics and of attitudes and beliefs within the dyad, as well as to how to adapt research approaches to engage and retain the dyad in research. This international symposium will therefore address these issues. The first presentation will describe a typology of dementia care dyad characteristics and needs in Germany. The second presentation will examine the challenges and opportunities associated with recruiting and retaining dementia dyads. The third presentation will explore ethical challenges posed in communication with dyads and possible solutions for the researcher. The final presentation reports on the Meeting Centre Support Program as an example of an effective psychosocial intervention employing research strategies that transcend cultural barriers. Our discussant, Katie Maslow, will synthesize the presentations and lead a discussion of future directions for policy and practice.


Author(s):  
AJung Moon ◽  
Shalaleh Rismani ◽  
H. F. Machiel Van der Loos

Abstract Purpose of Review To summarize the set of roboethics issues that uniquely arise due to the corporeality and physical interaction modalities afforded by robots, irrespective of the degree of artificial intelligence present in the system. Recent Findings One recent trend in the discussion of the ethics of emerging technologies has been the treatment of roboethics issues as those of “embodied AI,” a subset of AI ethics. In contrast to AI, however, robots leverage humans’ natural tendency to be influenced by their physical environment. Recent work in human-robot interaction highlights the impact that a robot’s presence, capacity to touch, and ability to move in our physical environment have on people, and it helps to articulate the ethical issues particular to the design of interactive robotic systems. Summary The corporeality of interactive robots poses a unique set of ethical challenges. These issues should be considered in the design of robotic systems irrespective of, and in addition to, the ethics of the artificial intelligence implemented in them.


2020 ◽  
Vol 11 (1) ◽  
pp. 228-232
Author(s):  
Adamantios Koumpis ◽  
Thomas Gees

Abstract In this article, we present our experiences from research into the healthy ageing and well-being of older people, and we report our personal opinions on robots that may help the elderly to have sex and to cope with isolation and loneliness. However, while there is a growing industry for sex robots and other sex toys and gadgets, there is also growing concern about the ethics of such an industry. As is the case with pornography, the concept of sex robots may be criticized, yet it has deep roots in human civilization, with erotic depictions that date back to the Palaeolithic and Mesolithic Ages. So the need for an artefact that offers sexually relevant functionality is not new at all. But what might be new and worrying is the potential for using artificial intelligence in sex robots in ways that might cause a repositioning of our entire value system. Such a threat is not related to the proliferation of sex robots per se but to the use of robots in general and in a variety of other fields of application.


2021 ◽  
pp. 146144482110227
Author(s):  
Erik Hermann

Artificial intelligence (AI) is (re)shaping communication and contributes to (commercial and informational) need satisfaction by means of mass personalization. However, the substantial personalization and targeting opportunities do not come without ethical challenges. Following an AI-for-social-good perspective, the authors systematically scrutinize the ethical challenges of deploying AI for mass personalization of communication content from a multi-stakeholder perspective. The conceptual analysis reveals interdependencies and tensions between ethical principles, which underscore the need for a basic understanding of AI inputs, functioning, agency, and outcomes. Through this form of AI literacy, individuals could be empowered to interact with and treat mass-personalized content in a way that promotes individual and social good while preventing harm.


AI & Society ◽  
2021 ◽  
Author(s):  
Nora Fronemann ◽  
Kathrin Pollmann ◽  
Wulf Loh

Abstract To integrate social robots into real-life contexts, it is crucial that they are accepted by users. Acceptance is not only related to the functionality of the robot but also depends strongly on how the user experiences the interaction. Established design principles from usability and user experience research can be applied to the realm of human–robot interaction to design robot behavior for the comfort and well-being of the user. Focusing the design on these aspects alone, however, comes with certain ethical challenges, especially regarding the user’s privacy and autonomy. Based on an example scenario of human–robot interaction in elder care, this paper discusses how established design principles can be used in social robotic design. It then juxtaposes these with ethical considerations such as privacy and user autonomy. Combining user experience and ethical perspectives, we propose adjustments to the original design principles and canvass our own design recommendations for a positive and ethically acceptable social human–robot interaction design. In doing so, we show that positive user experience and ethical design may sometimes be at odds but can be reconciled in many cases, if designers are willing to adjust and amend time-tested design principles.


2021 ◽  
Author(s):  
Christopher Marshall ◽  
Kate Lanyi ◽  
Rhiannon Green ◽  
Georgie Wilkins ◽  
Fiona Pearson ◽  
...  

BACKGROUND There is an increasing need to explore the value of soft-intelligence, leveraged using the latest artificial intelligence (AI) and natural language processing (NLP) techniques, as a source of analysed evidence to support public health research activity and decision-making.
OBJECTIVE The aim of this study was to further explore the value of soft-intelligence analysed using AI through a case study, which examined a large collection of UK tweets relating to mental health during the COVID-19 pandemic.
METHODS A search strategy comprising a list of terms related to mental health, COVID-19, and lockdown restrictions was developed to prospectively collate relevant tweets via Twitter’s advanced search application programming interface over a 24-week period. We deployed a specialist NLP platform to explore tweet frequency and sentiment across the UK and to identify key topics of discussion. A series of keyword filters was used to clean the initial data retrieved and to track specific mental health problems. Qualitative document analysis was carried out to further explore and expand upon the results generated by the NLP platform. All collated tweets were anonymised.
RESULTS We identified and analysed 286,902 tweets posted from UK user accounts from 23 July 2020 to 6 January 2021. The average sentiment score was 50%, suggesting overall neutral sentiment across all tweets over the study period. Major fluctuations in volume and sentiment appeared to coincide with key changes to local and/or national social-distancing measures. Tweets around mental health were polarising, discussed with both positive and negative sentiment. Key topics of consistent discussion over the study period included the impact of the pandemic on people’s mental health (both positive and negative), fear and anxiety over lockdowns, and anger and mistrust toward the government.
CONCLUSIONS Through the primary use of an AI-based NLP platform, we were able to rapidly mine and analyse emerging health-related insights from UK tweets into how the pandemic may be affecting people’s mental health and well-being. This type of real-time analysed evidence could act as a useful intelligence source that agencies, local leaders, and health care decision makers can draw from, particularly during a health crisis.
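As a rough illustration of the filtering and sentiment-scoring steps described in the methods, the sketch below shows one way such a pipeline could look in Python. It is not the authors’ pipeline: tweet collection via Twitter’s advanced search API and the specialist NLP platform are not reproduced, and the keyword lists, the choice of NLTK’s VADER analyser, and the rescaling of its compound score onto a 0–100 scale (so that 50 reads as neutral, matching how the average sentiment score is reported above) are all illustrative assumptions.

```python
# Minimal sketch (not the study's actual pipeline): keyword filtering and
# sentiment scoring of already-collected tweet texts using NLTK's VADER.
# Keyword lists and the 0-100 rescaling are illustrative assumptions.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # fetch the VADER lexicon once

MENTAL_HEALTH_TERMS = {"mental health", "anxiety", "depression", "wellbeing"}
PANDEMIC_TERMS = {"lockdown", "covid", "social distancing"}


def is_relevant(text: str) -> bool:
    """Keep tweets mentioning both a mental-health and a pandemic term."""
    lowered = text.lower()
    return (any(t in lowered for t in MENTAL_HEALTH_TERMS)
            and any(t in lowered for t in PANDEMIC_TERMS))


def sentiment_percent(text: str, sia: SentimentIntensityAnalyzer) -> float:
    """Map VADER's compound score from [-1, 1] onto 0-100, so 50 is neutral."""
    return (sia.polarity_scores(text)["compound"] + 1) * 50


if __name__ == "__main__":
    tweets = [
        "Lockdown is really affecting my mental health this week.",
        "Grateful for the support around anxiety during covid restrictions.",
        "Lovely weather today!",  # filtered out: no mental-health term
    ]
    sia = SentimentIntensityAnalyzer()
    relevant = [t for t in tweets if is_relevant(t)]
    scores = [sentiment_percent(t, sia) for t in relevant]
    print(f"{len(relevant)} relevant tweets, "
          f"mean sentiment {sum(scores) / len(scores):.1f}%")
```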

