How Futures Studies and Foresight Could Address Ethical Dilemmas of Machine Learning and Artificial Intelligence

2019 ◽  
Vol 12 (2) ◽  
pp. 169-180
Author(s):  
Alejandro Díaz-Domínguez

Drawing on ethical concerns raised by communities of machine learning developers, and considering the very short-term predictions of predictive analytics, several futures studies techniques are examined to offer insights into possible bridges between machine learning and foresight. The review is developed in three main sections: (1) a brief explanation of central concepts such as big data, machine learning, and artificial intelligence, hopefully not too simplistic yet readable for larger audiences; (2) a discussion of ethical issues such as bias, discrimination, and dilemmas in research; and (3) a brief description of how futures studies could address ethical dilemmas arising from the different time horizons of machine learning's immediate results, forecasting's short-term predictions, and foresight's long-term scenarios.

Author(s):  
S. Matthew Liao

This introduction outlines in section I.1 some of the key issues in the study of the ethics of artificial intelligence (AI) and proposes ways to take these discussions further. Section I.2 discusses key concepts in AI, machine learning, and deep learning. Section I.3 considers ethical issues that arise because current machine learning is data hungry; is vulnerable to bad data and bad algorithms; is a black box that has problems with interpretability, explainability, and trust; and lacks a moral sense. Section I.4 discusses ethical issues that arise because current machine learning systems may be working too well and human beings can be vulnerable in the presence of these intelligent systems. Section I.5 examines ethical issues arising out of the long-term impact of superintelligence such as how the values of a superintelligent AI can be aligned with human values. Section I.6 presents an overview of the essays in this volume.


AI Magazine ◽  
2018 ◽  
Vol 39 (1) ◽  
pp. 75-83 ◽  
Author(s):  
Santiago Ontañón ◽  
Nicolas A. Barriga ◽  
Cleyton R. Silva ◽  
Rubens O. Moraes ◽  
Levi H. S. Lelis

This article presents the results of the first edition of the microRTS (μRTS) AI competition, hosted by the IEEE Computational Intelligence in Games (CIG) 2017 conference. The goal of the competition is to spur research on AI techniques for real-time strategy (RTS) games. In this first edition, the competition received three submissions, focusing on problems such as balancing long-term and short-term search, using machine learning to learn how to play against specific opponents, and dealing with partial observability in RTS games.


2018 ◽  
Vol 7 (4.34) ◽  
pp. 384
Author(s):  
Muhamad Fazil Ahmad

This research examines the impact of the Big Data Processing Framework (BDPF) on Artificial Intelligence (AI) applications within Corporate Marketing Communication (CMC); the research question is: what is the potential impact of the BDPF on AI applications within the tactical and managerial functions of CMC? To fulfill the purpose of this research, a qualitative research strategy was applied, including semi-structured interviews with experts in the fields under examination: management, AI technology, and CMC. The findings were analyzed through a thematic analysis in which coding was conducted in two steps. AI has many useful applications within CMC, most of which are currently of the basic, rule-based form of AI, although more complicated communication systems are used in some areas. Based on these findings, the impact of the BDPF on AI applications is assessed by examining different characteristics of the processing frameworks. The BDPF initially imposes both an administrative and a compliance burden on organizations in this industry, which is particularly severe when machine learning is used. These burdens stem foremost from the general restriction on processing personal data and the data-erasure requirement. In the long term, however, these burdens instead contribute to a positive impact on machine learning. The timeframe until enforcement contributes to a somewhat negative impact in the short term, as does the uncertainty around interpretations of the BDPF requirements. Yet the BDPF provides flexibility in how to become compliant, which is favorable for AI applications. Finally, BDPF compliance can increase company value and thereby incentivize investment in AI models of higher transparency. The impact of the BDPF is quite insignificant for the basic forms of AI applications, which are currently most common within CMC. However, for the more complicated applications in use, the BDPF has a more severe negative impact in the short term, while having a positive impact in the long term.


Author(s):  
Bernd Carsten Stahl

A discussion of the ethics of artificial intelligence hinges on the definition of the term. In this chapter I propose three interrelated but distinct concepts of AI, which raise different types of ethical issues. The first concept of AI is that of machine learning, which is often seen as an example of "narrow" AI. The second concept is that of artificial general intelligence, standing for the attempt to replicate human capabilities. Finally, I suggest that the term AI is often used to denote converging socio-technical systems. Each of these three concepts of AI has different properties and characteristics that give rise to different types of ethical concerns.


Philosophies ◽  
2021 ◽  
Vol 6 (1) ◽  
pp. 6
Author(s):  
Nadisha-Marie Aliman ◽  
Leon Kester ◽  
Roman Yampolskiy

In recent years, artificial intelligence (AI) safety has gained international recognition in light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice using concrete practical examples. Distinguishing between unintentionally and intentionally triggered AI risks with diverse socio-psycho-technological impacts, we exemplify a retrospective descriptive analysis followed by a retrospective counterfactual risk analysis. Building on these AI observatory tools, we present near-term transdisciplinary guidelines for AI safety. As a further contribution, we discuss differentiated and tailored long-term directions through the lens of two disparate modern AI safety paradigms, which for simplicity we term artificial stupidity (AS) and eternal creativity (EC). While both AS and EC acknowledge the need for a hybrid cognitive-affective approach to AI safety and overlap in many short-term considerations, they differ fundamentally in the nature of multiple envisaged long-term solution patterns. By compiling the relevant underlying contradistinctions, we aim to provide future-oriented incentives for constructive dialectics in practical and theoretical AI safety research.


2016 ◽  
Vol 25 (3) ◽  
pp. 554-556
Author(s):  
Jason Lesandrini ◽  
Carol O’Connell

Ethical issues in long-term care settings, although they have received attention in the literature, have not in our opinion received the level of attention they require. Thus, we applaud the Cambridge Quarterly for publishing this case. We can attest to the significance of ethical issues arising in long-term care facilities, as Mr. Hope's case is all too familiar to those practicing in these settings. What is unique about this case is that an actual ethics consult was made in a long-term care setting. We have seen very little in the published literature on the use of ethics structures in long-term care populations. Our experience is that these healthcare settings are ripe for ethical concerns and that providers, patients, families, and staff need and desire ethics resources to actively and preventively address them. The popular press has begun to recognize the ethical issues involved in long-term care settings and the need for ethics structures. Recently, in California, a nurse refused to initiate CPR for an elderly patient in a senior residence. In that case, the nurse was quoted as saying that the facility had a policy that nurses were not to start CPR for elderly patients.1 Although this case is not exactly the same as that of Mr. Hope, it highlights the need for developing robust ethics program infrastructures in long-term care settings that work toward addressing ethical issues through policy, education, and active consultation.


2021 ◽  
Author(s):  
Yongmin Cho ◽  
Rachael A Jonas-Closs ◽  
Lev Y Yampolsky ◽  
Marc W Kirschner ◽  
Leonid Peshkin

We present a novel platform for testing the effects of interventions on the life- and health-span of a short-lived, semi-transparent freshwater organism that is sensitive to drugs and has complex behavior and physiology: the planktonic crustacean Daphnia magna. Within this platform, dozens of complex behavioral features of both routine motion and response to stimuli are continuously and accurately quantified for large homogeneous cohorts via an automated phenotyping pipeline. We build predictive machine learning models calibrated using chronological age and extrapolate onto phenotypic age. We further apply the model to estimate phenotypic age under pharmacological perturbation. Our platform provides a scalable framework for drug screening and characterization in both life-long and instant assays, as illustrated using a long-term dose-response profile of metformin and short-term assays of such well-studied substances as caffeine and alcohol.
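The abstract does not specify the model class, but the general technique it names, calibrating a predictor on chronological age and reading its output on a perturbed cohort as "phenotypic age", can be sketched as follows. Everything here is illustrative: the synthetic features, the linear model, and the simulated drug effect are all assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cohort: each row is one animal's behavioral feature vector
# (stand-ins for traits like swimming speed or response latency); ages in days.
n, d = 200, 5
ages = rng.uniform(1.0, 60.0, size=n)
X = np.outer(ages, rng.uniform(0.5, 1.5, d)) + rng.normal(0.0, 2.0, (n, d))

# Calibration step: fit a linear model predicting chronological age
# from the behavioral features (intercept column appended).
Xb = np.column_stack([X, np.ones(n)])
w, *_ = np.linalg.lstsq(Xb, ages, rcond=None)

# Extrapolation step: apply the same model to a perturbed cohort; its
# prediction is interpreted as phenotypic age. The perturbation is simulated
# here as a uniform 20% shift in all features.
treated = X[:10] * 0.8
phenotypic_age = np.column_stack([treated, np.ones(10)]) @ w
print(phenotypic_age.round(1))
```

A prediction below the cohort's known chronological age would indicate a "younger" phenotype under the intervention, which is the comparison the platform's life-long and instant assays rely on.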


2021 ◽  
Vol 2068 (1) ◽  
pp. 012042
Author(s):  
A Kolesnikov ◽  
P Kikin ◽  
E Panidi

The field of logistics and transport operates with large amounts of data. Transforming such arrays into knowledge and processing them with machine learning methods will help find additional reserves for optimizing transport and logistics processes and supply chains. This article analyses the possibilities and prospects for applying machine learning and geospatial knowledge in the field of logistics and transport, using specific examples. The long-term impact of geospatial-based artificial intelligence systems on such processes as procurement, delivery, inventory management, maintenance, and customer interaction is considered.


2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Mahmuda Akhtar ◽  
Sara Moridpour

In recent years, traffic congestion prediction has become a growing research area, especially within machine learning and artificial intelligence (AI). With the introduction of big data from stationary sensors or probe-vehicle data and the development of new AI models over the last few decades, this research area has expanded extensively. Traffic congestion prediction, especially short-term prediction, is made by evaluating different traffic parameters. Most of the research focuses on historical data for forecasting traffic congestion; only a few articles address real-time prediction. This paper systematically summarises the existing research applying the various methodologies of AI, notably different machine learning models. The paper groups the models under the respective branches of AI, and the strengths and weaknesses of the models are summarised.
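The survey covers many model families, but the core setup of short-term prediction from historical sensor data can be illustrated with a minimal autoregressive baseline. The synthetic congestion series, interval length, and lag count below are all hypothetical, chosen only to show the shape of the problem.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stationary-sensor series: a congestion index per 5-minute
# interval, with a daily-like cycle (288 intervals/day) plus noise.
t = np.arange(2000)
series = 50.0 + 30.0 * np.sin(2 * np.pi * t / 288) + rng.normal(0.0, 3.0, t.size)

# Short-term prediction as supervised learning: the previous k observations
# (lag features) predict the congestion index of the next interval.
k = 6
X = np.column_stack([series[i : i + t.size - k] for i in range(k)])
y = series[k:]

# Fit a linear autoregressive model (intercept column appended).
Xb = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
pred = Xb @ w

mae = np.abs(pred - y).mean()
print(round(mae, 2))
```

The models surveyed in the paper (e.g. neural networks) replace this linear map with richer learners, but the lag-feature framing of "short-term" prediction is the same.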

