Morals & Machines
Latest Publications


TOTAL DOCUMENTS

44
(FIVE YEARS 44)

H-INDEX

0
(FIVE YEARS 0)

Published By Nomos Verlag

2747-5174

2021 ◽  
Vol 1 (1) ◽  
pp. 52-59
Author(s):  
Valentin Jeutner

Quantum computers are legal things which are going to affect our lives in a tangible manner. As such, their operation and development must be regulated and supervised. No doubt, the transformational potential of quantum computing is remarkable. But if it goes unchecked, the development of quantum computers is also going to impact social and legal power-relations in a remarkable manner. Legal principles that can guide regulatory action must be developed in order to hedge the risks associated with the development of quantum computing. This article contributes to the development of such principles by proposing the quantum imperative. The quantum imperative provides that regulators and developers must ensure that the development of quantum computers: (1) does not create or exacerbate inequalities, (2) does not undermine individual autonomy, and (3) does not occur without consulting those whose interests it affects.


2021 ◽  
Vol 1 (1) ◽  
pp. 76-87
Author(s):  
Alexander Buhmann ◽  
Christian Fieseler

Organizations increasingly delegate agency to artificial intelligence. However, such systems can yield unintended negative effects as they may produce biases against users or reinforce social injustices. What pronounces them as a unique grand challenge, however, are not their potentially problematic outcomes but their fluid design. Machine learning algorithms are continuously evolving; as a result, their functioning frequently remains opaque to humans. In this article, we apply recent work on tackling grand challenges through robust action to assess the potential and obstacles of managing the challenge of algorithmic opacity. We stress that although this approach is fruitful, it can be gainfully complemented by a discussion regarding the accountability and legitimacy of solutions. In our discussion, we extend the robust action approach by linking it to a set of principles that can serve to evaluate organizational approaches to tackling grand challenges with respect to their ability to foster accountable outcomes under the intricate conditions of algorithmic opacity.


2021 ◽  
Vol 1 (1) ◽  
pp. 11-11

2021 ◽  
Vol 1 (1) ◽  
pp. 30-43
Author(s):  
Surjo Soekadar ◽  
Jennifer Chandler ◽  
Marcello Ienca ◽  
Christoph Bublitz

Recent advances in neurotechnology allow for an increasingly tight integration of the human brain and mind with artificial cognitive systems, blending persons with technologies and creating an assemblage that we call a hybrid mind. In some ways the mind has always been a hybrid, emerging from the interaction of biology, culture (including technological artifacts) and the natural environment. However, with the emergence of neurotechnologies enabling bidirectional flows of information between the brain and AI-enabled devices, integrated into mutually adaptive assemblages, we have arrived at a point where the specific examination of this new instantiation of the hybrid mind is essential. Among the critical questions raised by this development are the effects of these devices on the user’s perception of the self, and on the user’s experience of their own mental contents. Questions arise related to the boundaries of the mind and body and whether the hardware and software that are functionally integrated with the body and mind are to be viewed as parts of the person or separate artifacts subject to different legal treatment. Other questions relate to how to attribute responsibility for actions taken as a result of the operations of a hybrid mind, as well as how to settle questions of the privacy and security of information generated and retained within a hybrid mind.


2021 ◽  
Vol 1 (1) ◽  
pp. 24-31
Author(s):  
Thales Bertaglia ◽  
Adrien Dubois ◽  
Catalina Goanta

This short discussion paper addresses how controversy is monetized online by reflecting on a new iteration of the shock value in media production, identified on social media as the ‘clout chasing’ phenomenon. We first exemplify controversial behavior, and subsequently proceed to define clout chasing, discussing this concept in relation to existing frameworks for the understanding of controversy on social media. We then outline what clout chasing entails as a content monetization strategy, and address the risks associated with this approach. In doing so, we introduce the concept of ‘content self-moderation,’ which encompasses how creators use content moderation as a way to hedge monetization risks arising out of their reliance on controversy for economic growth. This concept is discussed in the context of the automated content governance entailed by algorithmic platform architectures, to contribute to existing scholarship on platform governance.


2021 ◽  
Vol 1 (1) ◽  
pp. 86-100
Author(s):  
Sofia Ranchordas

Recent EU legislative and policy initiatives aim to offer flexible, innovation-friendly, and future-proof regulatory frameworks. Key examples are the EU Coordinated Plan on AI and the recently published EU AI Regulation Proposal, which refer to the importance of experimenting with regulatory sandboxes so as to balance innovation in AI against its potential risks. Originally developed in the Fintech sector, regulatory sandboxes create a test bed for a selected number of innovative projects, by waiving otherwise applicable rules, guiding compliance, or customizing enforcement. Despite the burgeoning literature on regulatory sandboxes and the regulation of AI, the legal, methodological, and ethical challenges of these anticipatory or, at times, adaptive regulatory frameworks have remained understudied. This exploratory article delves into some of the benefits and intricacies of allowing for experimental instruments in the context of the regulation of AI. This article’s contribution is twofold: first, it contextualizes the adoption of regulatory sandboxes in the broader discussion on experimental approaches to regulation; second, it offers a reflection on the steps ahead for the design and implementation of AI regulatory sandboxes.


2021 ◽  
Vol 1 (1) ◽  
pp. 9-9

