On the issue of ethical aspects of the artificial intelligence systems implementation in healthcare

2021 ◽  
Vol 2 (3) ◽  
pp. 356-368
Author(s):  
Daria E. Sharova ◽  
Viktoria V. Zinchenko ◽  
Ekaterina S. Akhmad ◽  
Olesia A. Mokienko ◽  
Anton V. Vladzymyrskyy ◽  
...  

The article analyzes ethical issues inherent in the different life-cycle stages of artificial intelligence systems and provides up-to-date information about global and domestic trends in this area. International and national experience concerning the ethical issues of using artificial intelligence systems in healthcare is described. In addition, international and national strategies for the development of artificial intelligence in healthcare are analyzed, with a focus on national development, and the main trends, similarities, and differences between strategies are identified. Furthermore, the ethical components of clinical trials evaluating the safety and efficacy of artificial intelligence systems in Russia are described. The domestic state of the art and globally unique experience in the technical regulation of artificial intelligence systems are shown: documents unifying and standardizing requirements for the development, testing, and operation of artificial intelligence systems in healthcare are presented, and unparalleled Russian experience with certification requirements for artificial intelligence-based medical devices is demonstrated. The article also summarizes the main conclusions and emphasizes the importance of a strong, successful healthcare system based on artificial intelligence technologies that build trust and comply with ethical standards.

Author(s):  
Al'bina Slavovna Lolaeva ◽  
Kristina Ushangievna Sakaeva

Ethical norms and the law are inextricably linked in modern society. The adoption of major legal decisions is affected by various ethical rules. Artificial intelligence moves these problems into a new dimension. Systems that use artificial intelligence are becoming more autonomous in the complexity of the tasks they accomplish and in their potential impact on the external environment. This diminishes the human ability to comprehend, predict, and control their activity. People usually underestimate the actual level of autonomy of such systems. It is underlined that machines based on artificial intelligence can learn from their own experience and perform actions not intended by their developers. This leads to certain ethical and legal difficulties, which are discussed in this article. In view of the specificity of artificial intelligence, the authors make suggestions on the direct responsibility of particular systems. Following this logic, there are no fundamental reasons why autonomous systems should not be held legally accountable for their actions. However, the question of the need or advisability of imposing such responsibility (at the present stage specifically) remains open. This is partially due to the ethical issues listed above. It might be more effective to hold the programmers or users of autonomous systems accountable for the actions of these systems; however, this may slow innovation. This is precisely why a proper balance needs to be found.


2021 ◽  
pp. 79-89
Author(s):  
E.V. Skurko ◽  

The review examines the ethical aspects of the use of artificial intelligence systems and their legal regulation, both at the international level and in individual countries and jurisdictions. The key provisions of the EU General Data Protection Regulation (GDPR, 2018) and other EU documents are analyzed, in particular the European Parliament's 2017 resolution on Civil Law Rules on Robotics and the European Commission's 2018 communication on Artificial Intelligence for Europe, as well as issues of the legal regulation of AI ethics in China, the United States, and other countries.


2020 ◽  
Vol 30 (Supplement_5) ◽  
Author(s):  
N Leventi ◽  
A Vodenitcharova ◽  
K Popova

Abstract

Issue: Innovative information technologies (IIT) such as artificial intelligence (AI), big data, etc. promise to support individual patient care and promote public health. Their use raises ethical, social, and legal issues. Here we demonstrate how the guidelines for trustworthy AI can help address these ethical issues in the case of clinical trials (CT).

Description: In 2018 the European Commission established the High-Level Expert Group on Artificial Intelligence (AI HLEG). The group proposed Guidelines to promote Trustworthy AI, with three components that should be met throughout a system's entire life cycle: it should be lawful, ethical, and robust. Trustworthiness is a prerequisite for people and societies to develop and use AI systems. We used a focus-group methodology to explore how the guidelines for trustworthy AI can help address the ethical issues that arise from the application of AI in CTs.

Results: The discussion was directed at the seven requirements for trustworthy AI in CTs, with questions such as: Are they relevant to CTs as a whole? Would they be applicable to the use of IIT such as AI in CTs? Are you currently applying part, or all, of the proposed list? In the future, would you apply some, or all, of the proposed list? Is the administrative burden of applying the requirements justified by the effect?

Lessons: It was recommended that: the guidelines are relevant to the conduct of CTs; the planning and implementation of CTs using IIT should take them into account; ethical aspects and challenges are of the utmost importance; the proposed list is a very comprehensive framework; particular attention should be paid where more vulnerable groups are affected; and the administrative burden is acceptable, as the effect exceeds the resources invested.

Key messages: IIT are becoming increasingly important in medicine, and requirements for trustworthy IIT and AI are necessary. The guidelines provided by the AI HLEG are an appropriate instrument in the case of CTs.


Author(s):  
M. G. Koliada ◽  
T. I. Bugayova

The article discusses the history of the problem of using artificial intelligence systems in education and pedagogy. Two directions of its development are shown, "Computational Pedagogy" and "Educational Data Mining," in which poorly studied aspects of the internal mechanisms of the functioning of artificial intelligence systems in this field are revealed. The main task is the problem of interfacing the system's kernel with blocks of pedagogical and thematic databases, as well as with blocks for the pedagogical diagnostics of the student and the teacher. The role of pedagogical diagnosis as a clear reflection of the complex influence of factors and causes is shown: it provides the intelligent system with timely and reliable information on how various causes intertwine in their interaction, which of them are currently dangerous, and where a decline in performance characteristics is expected. All components of the teaching and educational system are subject to diagnosis; without it, no pedagogical situation can be managed optimally. The means of obtaining information about students, as well as the "mechanisms" by which intelligent systems work, based on innovative ideas from advanced pedagogical experience in diagnosing the professionalism of a teacher, are considered. Ways of assessing the teacher's skill on the basis of ideas developed by American scientists are shown. Among them, the approaches of the researchers D. Rajonz and U. Bronfenbrenner, who put the teacher's attitude towards students, their views, and their intellectual and emotional characteristics at the forefront, are singled out. An assessment of the teacher's work according to N. Flanders's system, in the form of the so-called "Interaction Analysis," through the mechanism of recording such elements as the teacher's verbal behavior, the events of the lesson, and their sequence, is also proposed. A system for assessing the professionalism of a teacher according to B. O. Smith and M. O. Meux is examined through the study of the logic of teaching and the use of logical operations in the lesson. Samples of forms of external communication between the intelligent system and the learning environment are given. It is indicated that the delivery of the productive solutions found can take the form most acceptable and comfortable for both students and the teacher, in one of three approaches. The first suggests that artificial intelligence in this area can be represented as a robotic being in human form; the second, that it is enough to confine oneself to specially organized input-output systems for the targeted transmission of effective methodological recommendations and instructions to both students and teachers; the third, that life will force the invention of completely new hybrid forms of interaction between the two sides in the form of interactive educational environments, to some extent resembling the educational spaces of virtual reality.


Author(s):  
Natalia V. Vysotskaya ◽  
T. V. Kyrbatskaya

The article is devoted to the main directions of the digital transformation of the transport industry in Russia. In the process of digital transformation, it is proposed to: integrate the community approach into the company's business model using blockchain technology and the methods and results of data science; complement the new digital culture with a digital team and new communities that help management solve business problems; and focus the attention of the company's management on its employees, developing in them the competencies that robots and artificial intelligence systems cannot implement: algorithmic, computable, and non-linear thinking in all employees of the company.


This book explores the intertwining domains of artificial intelligence (AI) and ethics—two highly divergent fields which at first seem to have nothing to do with one another. AI is a collection of computational methods for studying human knowledge, learning, and behavior, including by building agents able to know, learn, and behave. Ethics is a body of human knowledge—far from completely understood—that helps agents (humans today, but perhaps eventually robots and other AIs) decide how they and others should behave. Despite these differences, however, the rapid development of AI technology today has led to a growing number of ethical issues in a multitude of fields, ranging from disciplines as far-reaching as international human rights law to issues as intimate as personal identity and sexuality. In fact, the number and variety of topics in this volume illustrate the breadth, diversity of content, and at times exasperating vagueness of the boundaries of "AI Ethics" as a domain of inquiry. Within this discourse, the book points to the capacity of sociotechnical systems that utilize data-driven algorithms to classify, to make decisions, and to control complex systems. Given the wide-reaching and often intimate impact these AI systems have on daily human lives, this volume attempts to address the increasingly complicated relations between humanity and artificial intelligence. It considers not only how humanity must conduct itself toward AI but also how AI must behave toward humanity.


Author(s):  
Bryant Walker Smith

This chapter highlights key ethical issues in the use of artificial intelligence in transport, using automated driving as an example. These issues include the tension between technological solutions and policy solutions; the consequences of safety expectations; the complex choice between human authority and computer authority; and power dynamics among individuals, governments, and companies. In 2017 and 2018, the U.S. Congress considered automated driving legislation that was generally supported by many of the larger automated-driving developers. However, this legislation failed to pass because of a lack of trust in technologies and institutions. Trustworthiness is much more of an ethical question. Automated vehicles will not be driven by individuals or even by computers; they will be driven by companies acting through their human and machine agents. An essential issue for this field—and for artificial intelligence generally—is how the companies that develop and deploy these technologies should earn people's trust.

