AI Ethics in Computational Psychiatry: From the Neuroscience of Consciousness to the Ethics of Consciousness

2021 ◽  
pp. 113704
Author(s):  
Wanja Wiese ◽  
Karl J. Friston
2017 ◽  
Vol 65 (1) ◽  
pp. 21-26 ◽  
Author(s):  
Quentin J. M. Huys

Abstract: "Computational psychiatry" is a new field of research that aims to translate advances from theoretical and experimental neuroscience into clinical applications for psychiatry. The potential benefit of mathematical models for psychiatric applications arises above all from the complexity of psychiatric phenomena, which demands new analytical approaches. Concretely, such models can, first, capture intrapsychic processes that are otherwise not directly measurable; learning processes are one example. Second, phenomena at different levels can be related to one another quantitatively, for example the effect of ion-channel dysfunctions on short-term memory. Third, machine-learning methods can be combined with these models to analyse large datasets. Although first approaches from this research have already demonstrated potential clinical benefit, the field is still young. The article closes with the proposal to draw on procedures from pharmaceutical product development for the validation of these theoretical applications.
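The first of these points, that models make otherwise unobservable learning processes measurable, is commonly operationalised in computational psychiatry with reinforcement-learning models fitted to behavioural data. The sketch below is a minimal illustration of that idea: a generic delta-rule learner on a two-armed bandit task. The task setup, parameter names, and values are assumptions for illustration, not taken from the article.

```python
# Illustrative sketch only: a minimal Rescorla-Wagner-style learner of the kind
# used in computational psychiatry to infer latent learning processes from
# choice data. Parameters (alpha, beta) and the bandit task are assumptions.
import numpy as np

def simulate_learner(rewards, alpha=0.3, beta=5.0, seed=0):
    """Simulate two-armed bandit choices with a delta-rule learner.

    rewards: array of shape (n_trials, 2) with the payoff of each arm per trial.
    alpha:   learning rate (how strongly prediction errors update values).
    beta:    inverse temperature (how deterministically values drive choices).
    """
    rng = np.random.default_rng(seed)
    q = np.zeros(2)                 # latent value estimates: not directly observable
    choices, values = [], []
    for r in rewards:
        p_right = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))  # softmax over 2 arms
        c = int(rng.random() < p_right)
        q[c] += alpha * (r[c] - q[c])   # delta-rule update on the chosen arm
        choices.append(c)
        values.append(q.copy())
    return np.array(choices), np.array(values)

# Example: 100 trials where the right arm pays off more often on average.
rewards = np.random.default_rng(1).binomial(1, [0.3, 0.7], size=(100, 2))
choices, values = simulate_learner(rewards)
print("proportion of right-arm choices:", choices.mean())
```

Fitting alpha and beta to a patient's observed choices is what turns the hidden value trajectory into a measurable quantity of the kind the abstract describes.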


J-Institute ◽  
2020 ◽  
Vol 5 (2) ◽  
pp. 34-41
Author(s):  
Eunsook Seo ◽  
Gyunyeol Park

J-Institute ◽  
2020 ◽  
Vol 5 (2) ◽  
pp. 27-33
Author(s):  
Yi Li ◽  
Gyunyeol Park

This book explores the intertwining domains of artificial intelligence (AI) and ethics—two highly divergent fields which at first seem to have nothing to do with one another. AI is a collection of computational methods for studying human knowledge, learning, and behavior, including by building agents able to know, learn, and behave. Ethics is a body of human knowledge—far from completely understood—that helps agents (humans today, but perhaps eventually robots and other AIs) decide how they and others should behave. Despite these differences, however, the rapid development of AI technology today has led to a growing number of ethical issues in a multitude of fields, ranging from disciplines as far-reaching as international human rights law to issues as intimate as personal identity and sexuality. In fact, the number and variety of topics in this volume illustrate the breadth, diversity of content, and at times exasperating vagueness of the boundaries of “AI Ethics” as a domain of inquiry. Within this discourse, the book points to the capacity of sociotechnical systems that utilize data-driven algorithms to classify, to make decisions, and to control complex systems. Given the wide-reaching and often intimate impact these AI systems have on daily human lives, this volume attempts to address the increasingly complicated relations between humanity and artificial intelligence. It considers not only how humanity must conduct itself toward AI but also how AI must behave toward humanity.


2021 ◽  
Vol 13 (15) ◽  
pp. 8503
Author(s):  
Henrik Skaug Sætra

Artificial intelligence (AI) now permeates all aspects of modern society, and we are simultaneously seeing an increased focus on issues of sustainability in all human activities. All major corporations are now expected to account for their environmental and social footprint and to disclose and report on their activities. This is carried out through a diverse set of standards, frameworks, and metrics related to what is referred to as ESG (environment, social, governance), which is now, increasingly often, replacing the older term CSR (corporate social responsibility). The challenge addressed in this article is that none of these frameworks sufficiently capture the nature of the sustainability-related impacts of AI. This creates a situation in which companies are not incentivised to properly analyse such impacts. Simultaneously, it allows companies that are aware of negative impacts not to disclose them. This article proposes a framework for evaluating and disclosing ESG-related AI impacts based on the United Nations’ Sustainable Development Goals (SDGs). The core of the framework is presented here, with examples of how it forces an examination of micro-, meso-, and macro-level impacts, a consideration of both negative and positive impacts, and an accounting of ripple effects and interlinkages between the different impacts. Such a framework makes analyses of AI-related ESG impacts more structured, systematic, and transparent, and it allows companies to draw on research in AI ethics in such evaluations. In the closing section, Microsoft’s sustainability reporting from 2018 and 2019 is used as an example of how sustainability reporting is currently carried out, and how it might be improved by using the approach advocated here.
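As a rough illustration of how such disclosures could be structured, the sketch below records a single SDG-indexed AI impact with its level (micro, meso, macro), its direction (positive or negative), and links to related impacts to capture ripple effects. The schema, field names, and the example entry are assumptions made for illustration; the article itself does not prescribe a data format.

```python
# Illustrative sketch only: one possible way to record AI-related ESG impacts in
# the structured, SDG-indexed form the article argues for. Field names, enums,
# and the example values are assumptions; the paper does not define a schema.
from dataclasses import dataclass, field
from enum import Enum

class Level(Enum):
    MICRO = "micro"   # individual users / employees
    MESO = "meso"     # organisations, communities
    MACRO = "macro"   # societies, ecosystems

class Direction(Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"

@dataclass
class SDGImpact:
    sdg: int                      # SDG number, 1..17
    description: str
    level: Level
    direction: Direction
    ripple_links: list[str] = field(default_factory=list)  # ids of related impacts

@dataclass
class AIESGReport:
    system_name: str
    impacts: list[SDGImpact] = field(default_factory=list)

    def by_level(self, level: Level) -> list[SDGImpact]:
        """Group impacts so micro-, meso-, and macro-level effects are each examined."""
        return [i for i in self.impacts if i.level == level]

# Hypothetical example entry for a recommender system's energy footprint (SDG 13).
report = AIESGReport("recommender-v1", impacts=[
    SDGImpact(13, "Training-related emissions", Level.MACRO, Direction.NEGATIVE,
              ripple_links=["sdg7-energy-mix"]),
])
print(len(report.by_level(Level.MACRO)), "macro-level impact(s) recorded")
```

Grouping entries by level and linking them through ripple references is one simple way to force the cross-level, interlinked examination the framework calls for.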


AI and Ethics ◽  
2021 ◽  
Author(s):  
Robert Hanna ◽  
Emre Kazim

Abstract: AI Ethics is a burgeoning and relatively new field that has emerged in response to growing concerns about the impact of artificial intelligence (AI) on human individuals and their social institutions. In turn, AI ethics is a part of the broader field of digital ethics, which addresses similar concerns generated by the development and deployment of new digital technologies. Here, we tackle the important worry that digital ethics in general, and AI ethics in particular, lack adequate philosophical foundations. In direct response to that worry, we formulate and rationally justify some basic concepts and principles for digital ethics/AI ethics, all drawn from a broadly Kantian theory of human dignity. Our argument, which is designed to be relatively compact and easily accessible, is presented in ten distinct steps: (1) what “digital ethics” and “AI ethics” mean, (2) refuting the dignity-skeptic, (3) the metaphysics of human dignity, (4) human happiness or flourishing, true human needs, and human dignity, (5) our moral obligations with respect to all human real persons, (6) what a natural automaton or natural machine is, (7) why human real persons are not natural automata/natural machines: because consciousness is a form of life, (8) our moral obligations with respect to the design and use of artificial automata or artificial machines, aka computers, and digital technology more generally, (9) what privacy is, and why invasions of digital privacy are morally impermissible whereas consensual entrances into digital privacy are either morally permissible or even obligatory, and finally (10) dignitarian morality versus legality, and digital ethics/AI ethics. We conclude by asserting our strongly held belief that a well-founded and generally accepted dignitarian digital ethics/AI ethics is of global existential importance for humanity.


Author(s):  
Jessica Morley ◽  
Anat Elhalal ◽  
Francesca Garcia ◽  
Libby Kinsey ◽  
Jakob Mökander ◽  
...  

Abstract: As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provide insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the ‘what’ and the ‘how’ of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice. We concluded that this method of closure is currently ineffective, as almost all existing translational tools and methods are either too flexible (and thus vulnerable to ethics washing) or too strict (unresponsive to context). This raised the question: if, even with technical guidance, AI ethics is challenging to embed in the process of algorithmic design, is the entire pro-ethical design endeavour rendered futile? And, if not, then how can AI ethics be made useful for AI practitioners? This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited, and how these limitations can potentially be overcome by providing a theoretical grounding for a concept that has been termed ‘Ethics as a Service’.

