TECHNOLOGICAL UTOPIA AND ARTIFICIAL INTELLIGENCE: REPRESENTATION IN THE LITERATURE OF THE EARLY 21st CENTURY (BASED ON THE LIBRETTO OF THE ELECTRONIC OPERA “2032: THE LEGEND OF THE UNFULFILLED FUTURE” BY V. ARGONOV)

2021 ◽  
Vol 15 (3) ◽  
pp. 69-80
Author(s):  
T.A. Zagidulina

Problem statement. Scientific and technological progress, associated with the ubiquitous spread of automation and the development of the Internet, has transformed the ways information is produced and perceived, raising new ethical problems connected with technology and artificial intelligence as represented in fiction. The purpose of this article is to identify the elements of techno-utopia in literature (based on the libretto of V. Argonov's electronic opera “2032: The Legend of the Unfulfilled Future”), to determine the place of artificial intelligence within it, and to consider options for resolving the ethical issues this raises.

Methodology (materials and methods). The paper employs both traditional literary research methods (literary hermeneutics, analysis of mythopoetics) and philosophical methods (the author draws on the philosophy of technology and the philosophy of transhumanism).

Research results. The study identifies a genetic connection between techno-utopia and the Soviet utopian project, grounded in part in Gnosticism, and establishes the central place of artificial intelligence in the artistic world of the emerging technological utopia.

Conclusions. Reading the text at both the profane and the religious-mystical level provides a key to understanding one vector of development of socialist realist literature as utopian. It reveals how Soviet mythology is transformed on the basis of post-Soviet culture, for example the myth of the great Stalinist family and the image of the leader. The interdisciplinary, complex analysis of the text leads to the conclusion that there is a contradiction between the ideology of the new techno-utopia and the postulates of socialist realism.

2021 ◽  
pp. medethics-2020-107024
Author(s):  
Tom Sorell ◽  
Nasir Rajpoot ◽  
Clare Verrill

This paper explores ethical issues raised by whole slide image-based computational pathology. After briefly giving examples drawn from some recent literature of advances in this field, we consider some ethical problems it might be thought to pose. These arise from (1) the tension between artificial intelligence (AI) research—with its hunger for more and more data—and the default preference in data ethics and data protection law for the minimisation of personal data collection and processing; (2) the fact that computational pathology lends itself to kinds of data fusion that go against data ethics norms and some norms of biobanking; (3) the fact that AI methods are esoteric and produce results that are sometimes unexplainable (the so-called ‘black box’ problem); and (4) the fact that computational pathology is particularly dependent on scanning technology manufacturers with interests of their own in profit-making from data collection. We shall suggest that most of these issues are resolvable.


2021 ◽  
Vol 7 (3) ◽  
pp. 539-547
Author(s):  
Yana V. Gaivoronskaya ◽  
Roman I. Dremliuga ◽  
Alexey Y. Mamychev ◽  
Olga I. Miroshnichenko

The research objective of the paper is to generalize the ethical problems associated with the development and implementation of autonomous robotic technologies (autonomous robotic devices, ARD) in the civil and military spheres. Unresolved ethical problems hinder the development of legal regulation of new technologies. The authors propose a typology of ethical problems of digitalization for the purpose of creating legal regulation concerning the use of artificial intelligence (AI) and other technologies. Depending on the scope of social relations covered and the forms of regulation proposed, the authors identified four groups of ethical problems of global digitalization, which are considered in the paper: philosophical, humanitarian, socio-ethical, and ethical-legal problems. It is concluded that the legitimacy of managerial decisions that endow robotic technologies with the potential to make decisions in the civil and military spheres should be determined in terms of ethical principles of regulating such relations.


Author(s):  
Gali Katznelson ◽  
Brandon Chan

Recent developments with artificial intelligence (AI) and cancer care suggest that AI has far reaching implications for the field. Such developments bring with them many ethical challenges for the oncologist. When integrating AI into patient care, oncologists can start with Beauchamp and Childress’ framework of biomedical ethics to consider ethical issues that AI can pose, such as challenges related to informed consent, preventing harm from bias, and the potential to reinforce structural inequities. In using AI, the greatest ethical imperative for the oncologist is to have an in-depth understanding of the technology being used. Understanding the AI being used in patient care will help oncologists navigate the myriad ethical problems associated with it.


E-Management ◽  
2021 ◽  
Vol 4 (1) ◽  
pp. 20-28
Author(s):  
A. S. Lobacheva ◽  
O. V. Sobol

The article reveals the main ethical problems and contradictions associated with the use of artificial intelligence and clarifies the concept of “artificial intelligence” itself. The authors analyse two areas of ethical problems of artificial intelligence: fundamental ideas about the ethics of artificial intelligent systems, and the creation of ethical norms. The paper examines the work of international organizations on developing ethical standards for the use of artificial intelligence: the Institute of Electrical and Electronics Engineers and UNESCO. The study then analyses the main difficulties in implementing artificial intelligent systems: employees' attitudes to the use of robots in production activities and to the automation of processes affecting their work functions and work organization; ethical issues of retraining and re-certifying employees following the introduction of new software products and robots; ethical issues of staff reductions resulting from the introduction of artificial intelligence and the automation of production and business processes; ethical problems in the processing of employees' personal data, including AI-based assessments of their psychological and physical condition, personal qualities, character traits, values and beliefs, as well as the tracking of employees' work; and ethical contradictions arising from the use of special tracking devices and technologies in robotic systems and modern software products, which also affect the employees interacting with them.


This book explores the intertwining domains of artificial intelligence (AI) and ethics—two highly divergent fields which at first seem to have nothing to do with one another. AI is a collection of computational methods for studying human knowledge, learning, and behavior, including by building agents able to know, learn, and behave. Ethics is a body of human knowledge—far from completely understood—that helps agents (humans today, but perhaps eventually robots and other AIs) decide how they and others should behave. Despite these differences, however, the rapid development in AI technology today has led to a growing number of ethical issues in a multitude of fields, ranging from disciplines as far-reaching as international human rights law to issues as intimate as personal identity and sexuality. In fact, the number and variety of topics in this volume illustrate the breadth, diversity of content, and at times exasperating vagueness of the boundaries of “AI Ethics” as a domain of inquiry. Within this discourse, the book points to the capacity of sociotechnical systems that utilize data-driven algorithms to classify, to make decisions, and to control complex systems. Given the wide-reaching and often intimate impact these AI systems have on daily human lives, this volume attempts to address the increasingly complicated relations between humanity and artificial intelligence. It considers not only how humanity must conduct itself toward AI but also how AI must behave toward humanity.


Author(s):  
Bryant Walker Smith

This chapter highlights key ethical issues in the use of artificial intelligence in transport by using automated driving as an example. These issues include the tension between technological solutions and policy solutions; the consequences of safety expectations; the complex choice between human authority and computer authority; and power dynamics among individuals, governments, and companies. In 2017 and 2018, the U.S. Congress considered automated driving legislation that was generally supported by many of the larger automated-driving developers. However, this automated-driving legislation failed to pass because of a lack of trust in technologies and institutions. Trustworthiness is much more of an ethical question. Automated vehicles will not be driven by individuals or even by computers; they will be driven by companies acting through their human and machine agents. An essential issue for this field—and for artificial intelligence generally—is how the companies that develop and deploy these technologies should earn people’s trust.


Author(s):  
Kenneth S. Pope

This chapter examines how ethical issues are approached differently by two prominent psychological associations, how they are encountered by psychologists, the formal complaints they give rise to, and how they can be approached systematically to avoid missteps. Included are basic assumptions about ethics; the distinct approaches to developing an ethics code taken by the American Psychological Association (APA) and the Canadian Psychological Association (CPA), and what each of these two codes provides; empirical data about what ethical problems psychologists encounter and what formal complaints they face; four major sets of ethical issues that are particularly complex and challenging (confidentiality, informed consent, competence, and boundaries); an area of major controversy (clinical psychology and national security); steps in ethical decision-making; and four possible lines of future research.


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Kathleen Murphy ◽  
Erica Di Ruggiero ◽  
Ross Upshur ◽  
Donald J. Willison ◽  
Neha Malhotra ◽  
...  

Abstract

Background: Artificial intelligence (AI) has been described as the “fourth industrial revolution,” with transformative and global implications, including in healthcare, public health, and global health. AI approaches hold promise for improving health systems worldwide, as well as individual and population health outcomes. While AI may have potential for advancing health equity within and between countries, we must consider the ethical implications of its deployment in order to mitigate its potential harms, particularly for the most vulnerable. This scoping review addresses the following question: What ethical issues have been identified in relation to AI in the field of health, including from a global health perspective?

Methods: Eight electronic databases were searched for peer-reviewed and grey literature published before April 2018 using the concepts of health, ethics, and AI, and their related terms. Records were independently screened by two reviewers and were included if they reported on AI in relation to health and ethics and were written in English. Data were charted on a piloted data charting form, and a descriptive and thematic analysis was performed.

Results: Of the 12,722 articles reviewed, 103 met the predetermined inclusion criteria. The literature was primarily focused on the ethics of AI in health care, particularly on carer robots, diagnostics, and precision medicine, but was largely silent on the ethics of AI in public and population health. The literature highlighted a number of common ethical concerns related to privacy, trust, accountability and responsibility, and bias. Largely missing from the literature was the ethics of AI in global health, particularly in the context of low- and middle-income countries (LMICs).

Conclusions: The ethical issues surrounding AI in the field of health are both vast and complex. While AI holds the potential to improve health and health systems, our analysis suggests that its introduction should be approached with cautious optimism. The dearth of literature on the ethics of AI within LMICs and within public health points to a critical need for further research into the ethical implications of AI in both global and public health, to ensure that its development and implementation are ethical for everyone, everywhere.


2021 ◽  
pp. 002203452110138
Author(s):  
C.M. Mörch ◽  
S. Atsu ◽  
W. Cai ◽  
X. Li ◽  
S.A. Madathil ◽  
...  

Dentistry increasingly integrates artificial intelligence (AI) to help improve the current state of clinical dental practice. However, this revolutionary technological field raises various complex ethical challenges. The objective of this systematic scoping review is to document the current uses of AI in dentistry and the ethical concerns or challenges they imply. Three health care databases (MEDLINE [PubMed], SciVerse Scopus, and Cochrane Library) and 2 computer science databases (ArXiv, IEEE Xplore) were searched. After 1,553 records were identified, the documents were filtered, and a full-text screening was performed. In total, 178 studies were retained and analyzed by 8 researchers specialized in dentistry, AI, and ethics. The team used Covidence for data extraction and Dedoose for the identification of ethics-related information. PRISMA guidelines were followed. Among the included studies, 130 (73.0%) were published after 2016, and 93 (52.2%) appeared in journals specialized in computer science. The technologies used were neural learning techniques in 75 studies (42.1%), traditional learning techniques in 76 (42.7%), or a combination of several technologies in 20 (11.2%). Overall, 7 countries contributed 109 (61.2%) of the studies. A total of 53 different applications of AI in dentistry were identified, involving most dental specialties. The use of initial data sets for internal validation was reported in 152 (85.4%) studies. Forty-five ethical issues related to the use of AI in dentistry were reported in 22 (12.4%) studies, organized around 6 principles: prudence (10 mentions), equity (8), privacy (8), responsibility (6), democratic participation (4), and solidarity (4). The proportion of studies mentioning AI-related ethical issues has remained similar in recent years, indicating no growing interest in this topic within dentistry.
This study confirms the growing presence of AI in dentistry and highlights a current lack of information on the ethical challenges surrounding its use. In addition, the scarcity of studies sharing their code could hinder future replications. The authors formulate recommendations to contribute to a more responsible use of AI technologies in dentistry.


Author(s):  
Jessica Morley ◽  
Anat Elhalal ◽  
Francesca Garcia ◽  
Libby Kinsey ◽  
Jakob Mökander ◽  
...  

Abstract

As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the ‘what’ and the ‘how’ of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice. We concluded that this method of closure is currently ineffective as almost all existing translational tools and methods are either too flexible (and thus vulnerable to ethics washing) or too strict (unresponsive to context). This raised the question: if, even with technical guidance, AI ethics is challenging to embed in the process of algorithmic design, is the entire pro-ethical design endeavour rendered futile? And, if not, how can AI ethics be made useful for AI practitioners? This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited, and how these limitations can potentially be overcome by providing theoretical grounding for a concept that has been termed ‘Ethics as a Service.’

