Conclusion

Author(s):  
Bernd Carsten Stahl

Abstract: The conclusion briefly summarises the main arguments of the book. It focuses on the requirements that mitigation options must meet if they are to address the ethical and human rights concerns of artificial intelligence. It also provides a high-level overview of the main recommendations brought forth in the book. It thereby shows how conceptual and empirical insights into the nature of AI, the ethical issues it raises and the mitigation strategies currently being discussed can be used to develop practically relevant conclusions. These conclusions and recommendations help to ensure that AI ecosystems are developed and shaped in ways that are conducive to human flourishing.

Author(s):  
Bernd Carsten Stahl

Abstract: The introductory chapter describes the motivation behind this book and provides a brief outline of the main argument. The book offers a novel categorisation of artificial intelligence that lends itself to a classification of ethical and human rights issues raised by AI technologies. It offers an ethical approach based on the concept of human flourishing. Following a review of currently discussed ways of addressing and mitigating ethical issues, the book analyses the metaphor of AI ecosystems. Taking the ecosystems metaphor seriously allows the identification of requirements that mitigation measures need to fulfil. On the basis of these requirements, the book offers a set of recommendations that allow AI ecosystems to be shaped in ways that promote human flourishing.


2021, Vol 5 (S1), pp. 37-45
Author(s):  
Elena S. Danilova ◽  
Elena V. Pupynina ◽  
Yulia A. Drygina ◽  
Vladimir S. Pugach ◽  
Oxana V. Markelova

The paper focuses on English lexemes used in mass-media publications about a new security development. The use of artificial intelligence for facial recognition and enhanced surveillance of citizens poses several ethical issues discussed in major broadsheet newspapers. Studies of evaluation as a cognitive category serve as the theoretical basis of the research. The contexts examined revealed lexical units that express an evaluation of surveillance and human rights issues. The lexemes fall into three semantic groups. Negative connotations are connected with personal experience or associations, as well as with human rights breaches, while advantages tend to be described with verbs denoting purpose. The use of AI is a highly controversial issue that deserves cross-disciplinary consideration.


AI & Society, 2021
Author(s):  
Bernd Carsten Stahl ◽  
Josephina Antoniou ◽  
Mark Ryan ◽  
Kevin Macnish ◽  
Tilimbe Jiya

Abstract: The ethics of artificial intelligence (AI) is a widely discussed topic. There are numerous initiatives that aim to develop principles and guidance to ensure that the development, deployment and use of AI are ethically acceptable. What is generally unclear is how organisations that make use of AI understand and address these ethical issues in practice. While there is an abundance of conceptual work on AI ethics, empirical insights are rare and often anecdotal. This paper fills the gap in our current understanding of how organisations deal with AI ethics by presenting empirical findings collected through a set of ten case studies and providing an account of the cross-case analysis. The paper reviews the discussion of the ethical issues of AI as well as the mitigation strategies that have been proposed in the literature. Against this background, the cross-case analysis categorises the organisational responses that were observed in practice. The discussion shows that organisations are highly aware of the AI ethics debate and keen to engage with ethical issues proactively. However, they make use of only a relatively small subset of the mitigation strategies proposed in the literature. These insights are of importance to organisations deploying or using AI and to the academic AI ethics debate, but they may be most valuable to policymakers involved in the current debate about suitable policy developments to address the ethical issues raised by AI.


Author(s):  
Bernd Carsten Stahl

Abstract: This chapter reviews the proposals that have been put forward to address ethical issues of AI. It divides them into policy-level proposals, organisational responses and guidance for individuals. It discusses how these mitigation options are reflected in the case studies exemplifying the social reality of AI ethics. The chapter concludes with an overview of the stakeholder groups affected by AI, many of whom play a role in implementing the mitigation strategies and addressing ethical issues in AI.


2020, Vol 30 (Supplement_5)
Author(s):  
N Leventi ◽  
A Vodenitcharova ◽  
K Popova

Abstract
Issue: Innovative information technologies (IIT) such as artificial intelligence (AI) and big data promise to support individual patient care and to promote public health. Their use raises ethical, social and legal issues. Here we demonstrate how the guidelines for trustworthy AI can help address those ethical issues in the case of clinical trials (CTs).
Description: In 2018 the European Commission established the High-Level Expert Group on Artificial Intelligence (AI HLEG). The group proposed guidelines to promote trustworthy AI, with three components that should be met throughout a system's entire life cycle: it should be lawful, ethical and robust. Trustworthiness is a prerequisite for people and societies to develop and use AI systems. We used a focus-group methodology to explore how the guidelines for trustworthy AI can help address the ethical issues that arise from the application of AI in CTs.
Results: The discussion was directed at the seven requirements for trustworthy AI in CTs, using questions such as: Are they relevant to CTs as a whole? Would they be applicable to the use of IIT such as AI in CTs? Are you currently applying part, or all, of the proposed list? In the future, would you apply some, or all, of the proposed list? Is the administrative burden of applying the requirements justified by the effect?
Lessons: It was recommended that the guidelines are relevant to the conduct of CTs; that the planning and implementation of CTs using IIT should take them into account; that ethical aspects and challenges are of the utmost importance; that the proposed list is a very comprehensive framework; that particular attention should be paid where more vulnerable groups are affected; and that the administrative burden is acceptable, as the effect exceeds the resources invested.
Key messages: IIT are becoming increasingly important in medicine, and requirements for trustworthy IIT and AI are necessary. In the case of CTs, the guidelines provided by the AI HLEG are an appropriate instrument.


Author(s):  
Lisa Forsberg

Anti-libidinal interventions (ALIs) are a type of crime-preventing neurointervention (CPN) already in use in many jurisdictions. This chapter examines different types of legal regimes under which ALIs might be provided to sex offenders: dedicated statutes that directly provide for ALI use, consensual ALI provision under general medical law principles, mental health legislation providing for ALI use (exemplified by the mental health regime in England and Wales), and European human rights law as it pertains to ALI provision. The chapter considers what we might learn from ALIs about likely or possible arrangements for the provision of other CPNs, and draws attention to some ethical issues raised by each of these types of regime that are worth keeping in mind when considering arrangements for CPN provision.


This book explores the intertwining domains of artificial intelligence (AI) and ethics, two highly divergent fields which at first seem to have nothing to do with one another. AI is a collection of computational methods for studying human knowledge, learning, and behavior, including by building agents able to know, learn, and behave. Ethics is a body of human knowledge, far from completely understood, that helps agents (humans today, but perhaps eventually robots and other AIs) decide how they and others should behave. Despite these differences, however, the rapid development of AI technology today has led to a growing number of ethical issues in a multitude of fields, ranging from disciplines as far-reaching as international human rights law to issues as intimate as personal identity and sexuality. In fact, the number and variety of topics in this volume illustrate the breadth, diversity of content, and at times exasperating vagueness of the boundaries of “AI Ethics” as a domain of inquiry. Within this discourse, the book points to the capacity of sociotechnical systems that utilize data-driven algorithms to classify, to make decisions, and to control complex systems. Given the wide-reaching and often intimate impact these AI systems have on daily human lives, this volume attempts to address the increasingly complicated relations between humanity and artificial intelligence. It considers not only how humanity must conduct itself toward AI but also how AI must behave toward humanity.


Author(s):  
Bryant Walker Smith

This chapter highlights key ethical issues in the use of artificial intelligence in transport by using automated driving as an example. These issues include the tension between technological solutions and policy solutions; the consequences of safety expectations; the complex choice between human authority and computer authority; and power dynamics among individuals, governments, and companies. In 2017 and 2018, the U.S. Congress considered automated driving legislation that was generally supported by many of the larger automated-driving developers. However, this automated-driving legislation failed to pass because of a lack of trust in technologies and institutions. Trustworthiness is much more of an ethical question. Automated vehicles will not be driven by individuals or even by computers; they will be driven by companies acting through their human and machine agents. An essential issue for this field—and for artificial intelligence generally—is how the companies that develop and deploy these technologies should earn people’s trust.


Author(s):  
Andrea Renda

This chapter assesses Europe’s efforts in developing a full-fledged strategy on the human and ethical implications of artificial intelligence (AI). The strong focus on ethics in the European Union’s AI strategy should be seen in the context of an overall strategy that aims at protecting citizens and civil society from abuses of digital technology but also as part of a competitiveness-oriented strategy aimed at raising the standards for access to Europe’s wealthy Single Market. In this context, one of the most peculiar steps in the European Union’s strategy was the creation of an independent High-Level Expert Group on AI (AI HLEG), accompanied by the launch of an AI Alliance, which quickly attracted several hundred participants. The AI HLEG, a multistakeholder group including fifty-two experts, was tasked with the definition of Ethics Guidelines as well as with the formulation of “Policy and Investment Recommendations.” With the advice of the AI HLEG, the European Commission put forward ethical guidelines for Trustworthy AI—which are now paving the way for a comprehensive, risk-based policy framework.

