Thinking Machines

2022
pp. 238-258
Author(s):  
Robin Craig

This chapter investigates ethical questions surrounding the possible future emergence of self-aware artificial intelligence (AI). Current research into ethical AI, and how it might be applied or extended to future AI, is discussed. It is argued that the development of self-aware machines, or their functional equivalents, is possible in principle, and so questions of their ethical status are important. The importance of an objective, reality-based ethics in maintaining human-friendly AI is identified. It is proposed that the conditional nature of life and the value of reason provide the basis of an objective ethics, whose implications include rights to life and liberty, and which applies equally to humans and self-aware machines. Crucial to the development of human-friendly AI will be research on encoding correct rules of reasoning into AI and, on that basis, validating objective ethics and determining to what extent it will apply to, and be followed voluntarily by, self-aware machines.

2021
Vol 90 (2)
pp. e513
Author(s):  
Tomasz Piotrowski
Joanna Kazmierska
Mirosława Mocydlarz-Adamcewicz
Adam Ryczkowski

Background. This paper evaluates the reporting of information related to the use and ethical aspects of artificial intelligence (AI) procedures in clinical trial (CT) papers focused on radiology, as well as in other (non-trial) original radiology articles (OA). Material and Methods. The evaluation was performed by three independent observers: a physicist, a physician and a computer scientist. The analysis covered two groups of publications, CT and OA. Each group included 30 papers published from 2018 to 2020, i.e., before the guidelines proposed by Liu et al. (Nat Med. 2020;26:1364-1374) became available. The set of items used to catalogue and verify the ethical status of the AI reporting was developed from those guidelines. Results. Most of the reviewed studies clearly stated their use of AI methods and, more importantly, almost all tried to address relevant clinical questions. Although patient inclusion and exclusion criteria were presented in most of the studies, there is a widespread lack of rigorous description of the study design beyond a detailed explanation of the AI approach itself. Few of the chosen studies provided information about data anonymization and secure data sharing. Only a few studies explored the patterns of incorrect predictions made by the proposed AI tools and their possible causes. Conclusion. The results of the review support the introduction of uniform guidelines for designing and reporting studies that use AI tools. Such guidelines help to produce robust, transparent and reproducible tools for use in real life.


2020
Vol 1
pp. 77-86
Author(s):  
Stoyko Petkov

The article discusses the rapidly evolving capabilities and growing presence of systems based on Artificial Intelligence (AI) that are used to create synthetic media content. Although many organizations generate synthetic media content for legitimate purposes, there has also been an increase in published manipulative and misleading media content intended for fraud, extortion or other unethical purposes. Artificially created content is useful, on the one hand, for projects in which it is used to restore voices or fill in missing information, and dangerous, on the other, when it is used to replace objective reality or to spread disinformation.


Author(s):  
David L. Poole
Alan K. Mackworth
