Framework for Assessing Ethical Aspects of Algorithms and Their Encompassing Socio-Technical System

2021 ◽  
Vol 11 (23) ◽  
pp. 11187
Author(s):  
Xadya van Bruxvoort ◽  
Maurice van Keulen

In the transition to a data-driven society, organizations have introduced data-driven algorithms that often apply artificial intelligence. In this research, an ethical framework was developed to ensure robustness and completeness and to avoid and mitigate potential public uproar. We take a socio-technical perspective, i.e., we view the algorithm embedded in an organization, with its infrastructure, rules, and procedures, as one to-be-designed system. The framework consists of five ethical principles: beneficence, non-maleficence, autonomy, justice, and explicability. It can be used during design to identify relevant concerns. The framework was validated by applying it to real-world fraud detection cases: Systeem Risico Indicatie (SyRI) of the Dutch government and the algorithm of the municipality of Amersfoort. The former is a controversial country-wide algorithm that was ultimately prohibited by the courts; the latter is an algorithm in development. In both cases, the framework proved effective in identifying all ethical risks. For SyRI, all concerns found in the media were also identified by the framework, centering mainly on the transparency of the entire socio-technical system. For the municipality of Amersfoort, the framework highlighted risks regarding the amount of sensitive data and communication to and with the public, presenting a more thorough overview than the risks the media raised.

Information ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 275
Author(s):  
Peter Cihon ◽  
Jonas Schuett ◽  
Seth D. Baum

Corporations play a major role in artificial intelligence (AI) research, development, and deployment, with profound consequences for society. This paper surveys opportunities to improve how corporations govern their AI activities so as to better advance the public interest. The paper focuses on the roles of and opportunities for a wide range of actors inside the corporation—managers, workers, and investors—and outside the corporation—corporate partners and competitors, industry consortia, nonprofit organizations, the public, the media, and governments. Whereas prior work on multistakeholder AI governance has proposed dedicated institutions to bring together diverse actors and stakeholders, this paper explores the opportunities they have even in the absence of dedicated multistakeholder institutions. The paper illustrates these opportunities with many cases, including the participation of Google in the U.S. Department of Defense Project Maven; the publication of potentially harmful AI research by OpenAI, with input from the Partnership on AI; and the sale of facial recognition technology to law enforcement by corporations including Amazon, IBM, and Microsoft. These and other cases demonstrate the wide range of mechanisms to advance AI corporate governance in the public interest, especially when diverse actors work together.


2021 ◽  
pp. 026638212110619
Author(s):  
Sharon Richardson

During the past two decades, there have been a number of breakthroughs in the fields of data science and artificial intelligence, made possible by advanced machine learning algorithms trained through access to massive volumes of data. However, their adoption and use in real-world applications remains a challenge. This paper posits that a key limitation in making AI applicable has been a failure to modernise the theoretical frameworks needed to evaluate and adopt outcomes. Such a need was anticipated with the arrival of the digital computer in the 1950s but has remained unrealised. This paper reviews how the field of data science emerged and led to rapid breakthroughs in algorithms underpinning research into artificial intelligence. It then discusses the contextual framework now needed to advance the use of AI in real-world decisions that impact human lives and livelihoods.


2021 ◽  
pp. medethics-2020-107095
Author(s):  
Charalampia (Xaroula) Kerasidou ◽  
Angeliki Kerasidou ◽  
Monika Buscher ◽  
Stephen Wilkinson

Artificial intelligence (AI) is changing healthcare and the practice of medicine as data-driven science and machine-learning technologies, in particular, are contributing to a variety of medical and clinical tasks. Such advancements have also raised many questions, especially about public trust. In response to these concerns there has been a concentrated effort from public bodies, policy-makers and technology companies leading the way in AI to address what is identified as a "public trust deficit". This paper argues that a focus on trust as the basis upon which a relationship between this new technology and the public is built is at best ineffective and at worst inappropriate or even dangerous, as it diverts attention from what is actually needed to actively warrant trust. Instead of agonising about how to facilitate trust, a type of relationship which can leave those trusting vulnerable and exposed, we argue that efforts should be focused on the difficult and dynamic process of ensuring reliance underwritten by strong legal and regulatory frameworks. From there, trust could emerge, not merely as a means to an end but as something to work towards in practice: the deserved result of an ongoing ethical relationship in which an appropriate, enforceable and reliable regulatory infrastructure is in place for problems, challenges and power asymmetries to be continuously accounted for and appropriately redressed.


2020 ◽  
pp. 096366252096549
Author(s):  
Gabrielle Samuel ◽  
Heilien Diedericks ◽  
Gemma Derrick

This article reports how 18 UK and Canadian population health artificial intelligence researchers in higher education institutions perceive the use of artificial intelligence systems in their research, and how this compares with their perceptions of the media portrayal of artificial intelligence systems. This is triangulated with a small scoping analysis of how UK and Canadian news articles portray artificial intelligence systems associated with health research and care. Interviewees had concerns about what they perceived as sensationalist reporting of artificial intelligence systems – a finding reflected in the media analysis. In line with Pickersgill's concept of 'epistemic modesty', they considered artificial intelligence systems better understood as non-exceptional methodological tools that are uncertain and unexciting. Adopting 'epistemic modesty' was sometimes hindered by the stakeholders to whom the research is disseminated, who may be less interested in hearing about the uncertainties of scientific practice – a dynamic with implications for both research and policy.


1996 ◽  
Vol 73 (1) ◽  
pp. 181-194 ◽  
Author(s):  
Thomas J. Johnson ◽  
Wayne Wanta ◽  
Timothy Boudreau ◽  
Janet Blank-Libra ◽  
Killian Schaffer ◽  
...  

This agenda-building study employed a path analysis model to examine the three-way relationship among the public, the media, and the president on the issue of drug abuse during the Nixon administration. The path model also measured the extent to which these actors were influenced by real-world conditions, operationalized as the number of drug-related arrests in the United States. Past studies have suggested that a cyclical relationship should exist among the president, the press, and the public. This study, however, found a linear relationship, with issues moving first from real-world conditions to the media and the public, then from the media to the public, and finally from the public to the president.


2020 ◽  
Vol 33 (2) ◽  
pp. 183-200 ◽  
Author(s):  
Merlin Stone ◽  
Eleni Aravopoulou ◽  
Yuksel Ekinci ◽  
Geraint Evans ◽  
Matt Hobbs ◽  
...  

Purpose
The purpose of this paper is to review the literature on applications of artificial intelligence (AI) in strategic situations and to identify the research needed on applying AI to strategic marketing decisions.

Design/methodology/approach
The approach was to carry out a literature review and to consult marketing experts who were invited to contribute to the paper.

Findings
There is little research into applying AI to strategic marketing decision-making. This research is needed, as the frontier of AI application to decision-making is moving in many management areas from operational to strategic. Given the competitive nature of such decisions and the insights from applying AI to defence and similar areas, it is time to focus on applying AI to strategic marketing decisions.

Research limitations/implications
The application of AI to strategic marketing decision-making is known to be taking place, but as it is commercially sensitive, data are not available to the authors.

Practical implications
There are strong implications for all businesses, particularly large businesses in competitive industries, where failure to deploy AI in the face of competition from firms that have deployed AI to improve their decision-making could be dangerous.

Social implications
The public sector is a very important marketing decision maker. Although in most cases it does not operate competitively, it must make decisions about making different services available to different citizens and identify the risks of not providing services to certain citizens, so this paper is relevant to the public sector.

Originality/value
To the best of the authors' knowledge, this is one of the first papers to probe the deployment of AI in strategic marketing decision-making.


Author(s):  
Seong Ho Park ◽  
Kyung-Hyun Do ◽  
Sungwon Kim ◽  
Joo Hyun Park ◽  
Young-Suk Lim

Artificial intelligence (AI) is expected to affect various fields of medicine substantially and has the potential to improve many aspects of healthcare. However, AI has also generated considerable hype. In applying AI technology to patients, medical professionals should be able to resolve any anxiety, confusion, and questions that patients and the public may have, and they are responsible for ensuring that AI becomes a technology beneficial for patient care. These responsibilities make acquiring sound knowledge of and experience with AI a task of high importance for medical students. Preparing for AI does not merely mean learning information technology such as computer programming; one should also acquire sufficient knowledge of basic and clinical medicine, data science, biostatistics, and evidence-based medicine. Medical students should not passively accept stories about AI in medicine in the media and on the Internet; rather, they should try to develop the ability to distinguish correct information from hype and spin, and even the capability to create thoroughly validated, trustworthy information for patients and the public.


Author(s):  
Norman G. Vinson ◽  
Heather Molyneaux ◽  
Joel D. Martin

The opacity of AI systems' decision making has led to calls to modify these systems so they can provide explanations for their decisions. This chapter contains a discussion of what these explanations should address and what their nature should be to meet the concerns that have been raised and to prove satisfactory to users. More specifically, the chapter briefly reviews the typical forms of AI decision-making that are currently used to make real-world decisions affecting people's lives. Based on concerns about AI decision making expressed in the literature and the media, the chapter follows with principles that the systems should respect and corresponding requirements for explanations to respect those principles. A mapping between those explanation requirements and the types of explanations generated by AI decision making systems reveals the strengths and shortcomings of the explanations generated by those systems.


Discourse ◽  
2021 ◽  
Vol 7 (4) ◽  
pp. 58-67
Author(s):  
A. Yu. Kolianov

Introduction. The article examines the image of artificial intelligence in the media and its evolution during the 2010s. Coverage of the rapid development of technology in the mass media requires careful analysis and systematic monitoring, because socio-ethical ideas about the place and role of artificial intelligence in human life are not yet fully formed. The paper attempts to study how the media represented artificial intelligence in the second decade of the 21st century.

Methodology and sources. Based on the results of quantitative and qualitative studies of the texts of Russian and foreign media, semantic changes in the representation of artificial intelligence are analyzed. To collect empirical information, we used document analysis (UNESCO reports and preparatory notes for the development of an ethical code of artificial intelligence), public opinion polls, and content analysis of Russian and foreign media.

Results and discussion. The study found a correlation between the intensity of references to artificial intelligence and political and economic phenomena. In particular, there is a connection with the growth of investors' economic activity in advanced technologies, the launch of innovative consumer technologies by large companies, and the strategic programs of states.

Conclusion. At present, artificial intelligence is seen as a positive technology, and its implementation in social and professional spheres is regarded as irreversible. The negative consequences of the development of AI are treated as an unobvious, hypothetical future. By the beginning of the third decade of the 21st century, the media discourse around AI had expanded to such a state of uncertainty that action was taken to establish an ethical framework for the development of the technology.


2021 ◽  
pp. 026101832098546
Author(s):  
Alexandra James ◽  
Andrew Whelan

In recent years, a discourse of ‘ethical artificial intelligence’ has emerged and gained international traction in response to widely publicised AI failures. In Australia, the discourse around ethical AI does not accord with the reality of AI deployment in the public sector. Drawing on institutional ethnographic approaches, this paper describes the misalignments between how technology is described in government documentation, and how it is deployed in social service delivery. We argue that the propagation of ethical principles legitimates established new public management strategies, and pre-empts questions regarding the efficacy of AI development; instead positioning implementation as inevitable and, provided an ethical framework is adopted, laudable. The ethical AI discourse acknowledges, and ostensibly seeks to move past, widely reported administrative failures involving new technologies. In actuality, this discourse works to make AI implementation a reality, ethical or not.

