Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them

Author(s):  
Filippo Santoni de Sio ◽  
Giulio Mecacci

Abstract The notion of a “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on the literature in moral and legal philosophy and the ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected problems (gaps in culpability, moral accountability, public accountability, and active responsibility) caused by different sources, some technical, others organisational, legal, ethical, and societal. Responsibility gaps may also occur with non-learning systems. The paper clarifies which aspects of AI may cause which gap in which form of responsibility, and why each of these gaps matters. It offers a critical review of partial and unsatisfactory attempts to address the responsibility gap: those that present it as a new and intractable problem (“fatalism”), those that dismiss it as a false problem (“deflationism”), and those that reduce it to only one of its dimensions or sources and/or present it as a problem that can be solved simply by introducing new technical and/or legal tools (“solutionism”). The paper also outlines a more comprehensive approach to addressing the responsibility gaps with AI in their entirety, based on the idea of designing socio-technical systems for “meaningful human control”, that is, systems aligned with the relevant human reasons and capacities.

2021 ◽  
Vol 54 (6) ◽  
pp. 1-35
Author(s):  
Ninareh Mehrabi ◽  
Fred Morstatter ◽  
Nripsuta Saxena ◽  
Kristina Lerman ◽  
Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, some work has been developed in traditional machine learning and deep learning that addresses such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigated different real-world applications that have shown biases in various ways, and we listed different sources of bias that can affect AI applications. We then created a taxonomy of fairness definitions that machine learning researchers have proposed to avoid the existing bias in AI systems. In addition, we examined different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and the ways they have tried to address them. There are still many future directions and solutions that can be pursued to mitigate the problem of bias in AI systems. We hope that this survey will motivate researchers to tackle these issues in the near future by building on existing work in their respective fields.
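To make the kind of group-fairness definition catalogued in such surveys concrete, the following is a minimal sketch computing two widely used metrics, statistical parity difference and equal opportunity difference. The toy data, group encoding, and function names are illustrative assumptions, not material from the survey itself.

```python
# Illustrative group-fairness metrics of the kind catalogued in fairness surveys.
# The data and group labels below are toy placeholders.
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(positive prediction | unprivileged) - P(positive prediction | privileged)."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true positive rates between unprivileged and privileged groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(0) - tpr(1)

# Toy example: binary predictions for two demographic groups (0 = unprivileged, 1 = privileged).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(statistical_parity_difference(y_pred, group))        # -0.25
print(equal_opportunity_difference(y_true, y_pred, group)) # about -0.33
```

Values close to zero indicate that the two groups are treated similarly under the chosen definition; which definition is appropriate depends on the application, which is precisely the kind of question the survey's taxonomy is meant to organize.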


Author(s):  
Peter Asaro

As the militaries of technologically advanced nations seek to apply increasingly sophisticated AI and automation to weapons technologies, a host of ethical, legal, social, and political questions arise. Central among these is whether it is ethical to delegate the decision to use lethal force to an autonomous system that is not under meaningful human control. Further questions arise as to who or what could or should be held responsible when lethal force is used improperly by such systems. This chapter argues that current autonomous weapons are not legal or moral agents that can be held morally responsible or legally accountable for their choices and actions, and that therefore humans need to maintain control over such weapons systems.


2020 ◽  
Vol 25 (1) ◽  
pp. 74-88 ◽  
Author(s):  
S Shyam Sundar

Abstract Advances in personalization algorithms and other applications of machine learning have vastly enhanced the ease and convenience of our media and communication experiences, but they have also raised significant concerns about privacy, the transparency of technologies, and human control over their operations. Going forward, reconciling such tensions between machine agency and human agency will be important in the era of artificial intelligence (AI), as machines become more agentic and media experiences become increasingly determined by algorithms. Theory and research should be geared toward a deeper understanding of the human experience of algorithms in general and the psychology of Human–AI interaction (HAII) in particular. This article proposes some directions by applying the dual-process framework of the Theory of Interactive Media Effects (TIME) for studying the symbolic and enabling effects of the affordances of AI-driven media on user perceptions and experiences.


Robotica ◽  
2009 ◽  
Vol 28 (4) ◽  
pp. 509-516 ◽  
Author(s):  
E. Onieva ◽  
V. Milanés ◽  
C. González ◽  
T. de Pedro ◽  
J. Pérez ◽  
...  

SUMMARY Artificial intelligence techniques applied to control processes are particularly useful when the elements to be controlled are complex and cannot be described by a linear model. A trade-off between performance and complexity is the main factor in the design of this kind of system. The use of fuzzy logic is especially indicated when trying to emulate human control actions such as driving a car. This paper presents a fuzzy system that cooperatively controls the throttle and brake pedals for automatic speed control at up to 50 km/h. It is thus appropriate for populated areas where driving involves constant speed changes, but within a range of low speeds, because of traffic jams, road signs, traffic lights, etc. The system receives the car's current and desired speeds and generates outputs to control the two pedals. It has been implemented in a real car and tested under real road conditions, showing good speed control with smooth actions resulting in accelerations that are comfortable for the car's occupants.
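A minimal sketch of the kind of fuzzy throttle/brake controller the summary describes is given below. The triangular membership functions, rule base, and defuzzification scheme are assumptions chosen for illustration, not the authors' actual design.

```python
# Illustrative fuzzy speed controller: fuzzify the speed error, apply a small
# rule base, and defuzzify into throttle/brake commands. All numbers are
# illustrative assumptions, not the paper's tuned design.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pedal_command(current_kmh, target_kmh):
    """Return (throttle, brake) in [0, 1] from current and target speed in km/h."""
    error = target_kmh - current_kmh  # positive means the car is too slow

    # Fuzzify the speed error into three linguistic terms.
    too_slow = tri(error, 0.0, 10.0, 25.0)
    ok       = tri(error, -5.0, 0.0, 5.0)
    too_fast = tri(error, -25.0, -10.0, 0.0)

    # Rules: too slow -> throttle, too fast -> brake, ok -> coast.
    # Defuzzify with a weighted average of the rule outputs.
    weight = too_slow + ok + too_fast
    if weight == 0.0:
        # Error outside all membership supports: saturate the relevant pedal.
        return (1.0, 0.0) if error > 0 else (0.0, 1.0)
    throttle = (too_slow * 1.0) / weight
    brake    = (too_fast * 1.0) / weight
    return throttle, brake

# Example: cruising at 35 km/h with a 50 km/h target -> mostly throttle, no brake.
print(fuzzy_pedal_command(35.0, 50.0))
```

Using separate, cooperating outputs for the two pedals mirrors the paper's setup, where throttle and brake are never pressed hard at the same time and transitions between them stay smooth.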


2020 ◽  
Author(s):  
Zhaoping Xiong ◽  
Ziqiang Cheng ◽  
Chi Xu ◽  
Xinyuan Lin ◽  
Xiaohong Liu ◽  
...  

Abstract Artificial intelligence (AI) models usually require large amounts of high-quality training data, which is in striking contrast to the situation of small and biased data faced by current drug discovery pipelines. The concept of federated learning has been proposed to utilize distributed data from different sources without leaking sensitive information from these data. This emerging decentralized machine learning paradigm is expected to dramatically improve the success of AI-powered drug discovery. Here we simulate the federated learning process with 7 aqueous solubility datasets from different sources, among which there are overlapping molecules with high or low biases in the recorded values. Beyond the benefit of gaining more data, we also demonstrate that federated training has a regularization effect that makes it superior to centralized training on the pooled datasets with high biases. Further, two more cases are studied to test the usability of federated learning in drug discovery. Our work not only demonstrates the application of federated learning in predicting drug-related properties, but also highlights its promising role in addressing the small-data and biased-data dilemma in drug discovery.
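The following is a minimal sketch of the federated averaging idea underlying such a setup: each data holder trains locally and only model parameters are shared and averaged. The linear model, client data, and hyperparameters are placeholders for illustration, not the study's actual models or solubility datasets.

```python
# Minimal FedAvg-style sketch: each client fits a local model on its own data,
# and a server averages the parameters. Model, data, and hyperparameters are
# illustrative placeholders, not the study's actual setup.
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=5):
    """A few epochs of gradient descent for linear regression on one client's data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One communication round: local training, then size-weighted averaging."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Toy example with three "clients" holding small, differently biased datasets.
rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])
clients = []
for bias in (0.0, 0.5, -0.5):
    X = rng.normal(size=(30, 2))
    y = X @ true_w + bias + rng.normal(scale=0.1, size=30)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(50):
    w = federated_round(w, clients)
print("federated estimate:", w)
```

Because only parameter updates leave each client, the raw (and possibly sensitive) records stay local, which is the property that makes federated training attractive for pooling proprietary drug-discovery data.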


2021 ◽  
Vol 5 (1) ◽  
pp. 53-72
Author(s):  
Elke Schwarz

In this article, I explore the (im)possibility of human control and question the presupposition that we can exercise morally adequate or meaningful control over AI-supported LAWS. Taking seriously Wiener’s warning that “machines can and do transcend some of the limitations of their designers and that in doing so they may be both effective and dangerous,” I argue that in the LAWS human-machine complex, technological features and the underlying logic of the AI system progressively close the spaces and limit the capacities required for human moral agency.


Author(s):  
Ilse Verdiesen

Autonomous Weapon Systems (AWS) can be defined as weapon systems equipped with Artificial Intelligence (AI). They are an emerging technology and are increasingly deployed on the battlefield. In the societal debate on Autonomous Weapon Systems, the concept of Meaningful Human Control (MHC) is often mentioned as a requirement, but MHC will not suffice as a requirement to minimize the unintended consequences of Autonomous Weapon Systems, because the definition of ‘control’ implies that one has the power to influence or direct the course of events or the ability to manage a machine. The AI characteristics of autonomy, interactivity, and adaptability in Autonomous Weapon Systems inherently imply that control in the strict sense is not possible. Therefore, a different approach is needed to minimize the unintended consequences of AWS. Several scholars describe the concept of Human Oversight in Autonomous Weapon Systems and in AI in general. Taddeo and Floridi (2018) recently argued that human oversight procedures are necessary to minimize unintended consequences and to compensate for unfair impacts of AI. In my PhD project, I will analyse the concepts needed to define, model, evaluate, and ensure human oversight in Autonomous Weapons and design a technical architecture to implement it.


2020 ◽  
Author(s):  
Mario Arturo Ruiz Estrada ◽  
Minsso LEE

Googlekonomia is an alternative economic research technique that focuses on finding the best and easiest access to a large number of economic databases and documents from different sources on the internet. The main objective of Googlekonomia is the technical evaluation of trustworthy economic databases and documents from different websites and search engines. Googlekonomia is thus able to monitor, evaluate, and classify a large number of economic databases and documents in order to study and solve possible economic problems. Finally, Googlekonomia evaluates access to a large number of possible economic databases and documents across internet sources and search engines, based on the combined use of artificial intelligence and a real-time multi-dimensional graphical modeling approach.


2020 ◽  
Vol 1 (4) ◽  
pp. 187-194
Author(s):  
Daniele Amoroso ◽  
Guglielmo Tamburrini

Abstract Purpose of Review To provide readers with a compact account of ongoing academic and diplomatic debates about autonomy in weapons systems, that is, about the moral and legal acceptability of letting a robotic system unleash destructive force in warfare and take attendant life-or-death decisions without any human intervention. Recent Findings A précis of current debates is provided, which focuses on the requirement that all weapons systems, including autonomous ones, should remain under meaningful human control (MHC) in order to be ethically acceptable and lawfully employed. Main approaches to MHC are described and briefly analyzed, distinguishing between uniform, differentiated, and prudential policies for human control over weapons systems. Summary The review highlights the crucial role played by the robotics research community in starting ethical and legal debates about autonomy in weapons systems. A concise overview is provided of the main concerns emerging in those early debates: respect for the laws of war, responsibility ascription issues, violation of the human dignity of potential victims of autonomous weapons systems, and increased risks for global stability. It is pointed out that these various concerns have been jointly taken to support the idea that all weapons systems, including autonomous ones, should remain under meaningful human control (MHC). Main approaches to MHC are described and briefly analyzed. Finally, it is emphasized that the MHC idea looms large on the shared control policies to adopt in other ethically and legally sensitive application domains for robotics and artificial intelligence.


Author(s):  
Andrew Stranieri ◽  
Zhaohao Sun

This chapter addresses whether AI can understand me. It uses a framework for regulating AI systems that draws on Strawson's moral philosophy and on concepts from jurisprudence and theories of regulation. The chapter proposes that, as AI algorithms increasingly draw inferences following repeated exposure to big datasets, they have become more sophisticated and rival human reasoning. Their regulation requires that AI systems have agency and are subject to the rulings of courts. Humans sponsor the AI systems for registration with regulatory agencies. This enables judges to make moral culpability decisions by taking the AI system's explanation into account along with the full social context of the misdemeanor. The proposed approach might facilitate the research and development of intelligent analytics, intelligent big data analytics, multiagent systems, artificial intelligence, and data science.

