The Oxford Handbook of Ethics of AI

Published by Oxford University Press (ISBN 9780190067397)

Author(s):  
Timnit Gebru

This chapter discusses the role of race and gender in artificial intelligence (AI). The rapid permeation of AI into society has not been accompanied by a thorough investigation of the sociopolitical issues that cause certain groups of people to be harmed rather than advantaged by it. For instance, recent studies have shown that commercial automated facial analysis systems have much higher error rates for dark-skinned women, while having minimal errors on light-skinned men. Moreover, a 2016 ProPublica investigation uncovered that machine learning–based tools that assess crime recidivism rates in the United States are biased against African Americans. Other studies show that natural language–processing tools trained on news articles exhibit societal biases. While many technical solutions have been proposed to alleviate bias in machine learning systems, a holistic and multifaceted approach must be taken. This includes standardization bodies determining what types of systems can be used in which scenarios, making sure that automated decision tools are created by people from diverse backgrounds, and understanding the historical and political factors that disadvantage certain groups who are subjected to these tools.
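
The disaggregated evaluation described here is, at its core, a simple computation: rather than reporting one overall accuracy, split the benchmark by subgroup and compare error rates. A minimal sketch in Python, using entirely hypothetical records and subgroup labels (not data from any of the studies cited):

```python
# Hypothetical subgroup audit of a classifier's predictions.
# Each record: (true_label, predicted_label, subgroup).
from collections import defaultdict

records = [
    ("female", "male",   "darker-skinned female"),
    ("female", "female", "darker-skinned female"),
    ("male",   "male",   "lighter-skinned male"),
    ("male",   "male",   "lighter-skinned male"),
    # ... a real audit would use many examples per subgroup
]

totals = defaultdict(int)
errors = defaultdict(int)
for true, pred, group in records:
    totals[group] += 1
    errors[group] += (pred != true)  # bool counts as 0/1

for group, n in totals.items():
    print(f"{group}: error rate {errors[group] / n:.1%} (n={n})")
```

An aggregate accuracy computed over all records would mask exactly the per-group gap this loop surfaces.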


Author(s):  
Joshua A. Kroll

This chapter addresses the relationship between AI systems and the concept of accountability. To understand accountability in the context of AI systems, one must begin by examining the various ways the term is used and the variety of concepts to which it is meant to refer. Accountability is often associated with transparency, the principle that systems and processes should be accessible to those affected through an understanding of their structure or function. For a computer system, this often means disclosure about the system’s existence, nature, and scope; scrutiny of its underlying data and reasoning approaches; and connection of the operative rules implemented by the system to the governing norms of its context. Transparency is a useful tool in the governance of computer systems, but only insofar as it serves accountability. There are other mechanisms available for building computer systems that support accountability of their creators and operators. Ultimately, accountability requires establishing answerability relationships that serve the interests of those affected by AI systems.


Author(s):  
Elana Zeide

This chapter looks at the use of artificial intelligence (AI) in education, which immediately conjures the fantasy of robot teachers, as well as fears that robot teachers will replace their human counterparts. However, AI tools impact much more than instructional choices. Personalized learning systems take on a whole host of other educational roles as well, fundamentally reconfiguring education in the process. They not only perform the functions of robot teachers but also make pedagogical and policy decisions typically left to teachers and policymakers. Their design, affordances, analytical methods, and visualization dashboards construct a technological, computational, and statistical infrastructure that literally codifies what students learn, how they are assessed, and what standards they must meet. Yet school procurement and implementation of these systems are rarely part of public discussion. If schools are to remain relevant to the educational process itself, as opposed to just its packaging and context, they and their stakeholders must be more proactive in demanding information from technology providers and in setting internal protocols to ensure effective and consistent implementation. Those who choose to outsource instructional functions should do so with sufficient transparency mechanisms in place to ensure professional oversight guided by well-informed debate.


Author(s):  
Petra Molnar

This chapter focuses on how technologies used in the management of migration—such as automated decision-making in immigration and refugee applications and artificial intelligence (AI) lie detectors—impinge on human rights with little international regulation. It argues that this lack of regulation is deliberate: states single out the migrant population as a viable testing ground for new technologies. Making migrants more trackable and intelligible justifies the use of more technology and data collection under the guise of national security, or even under tropes of humanitarianism and development. Technology is not inherently democratic, and human rights impacts are particularly important to consider in humanitarian and forced-migration contexts. An international human rights law framework is particularly useful for codifying and recognizing potential harms, because technology and its development are inherently global and transnational. Ultimately, more oversight and issue-specific accountability mechanisms are needed to safeguard the fundamental rights of migrants, such as freedom from discrimination, privacy rights, and procedural justice safeguards, such as the right to a fair decision-maker and the right of appeal.


Author(s):  
Chelsea Barabas

This chapter discusses contemporary debates regarding the use of artificial intelligence as a vehicle for criminal justice reform. It closely examines two general approaches to what has been widely branded as “algorithmic fairness” in criminal law: the development of formal fairness criteria and accuracy measures that illustrate the trade-offs of different algorithmic interventions; and the development of “best practices” and managerialist standards for maintaining a baseline of accuracy, transparency, and validity in these systems. Attempts to render AI-branded tools more accurate by addressing narrow notions of bias miss the deeper methodological and epistemological issues regarding the fairness of these tools. The key question is whether predictive tools reflect and reinforce the punitive practices that drive disparate outcomes, and how data regimes interact with penal ideology to naturalize these practices. The chapter then calls for a radically different understanding of the role and function of the carceral state as a starting place for re-imagining the role of “AI” as a transformative force in the criminal legal system.
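
To make the first approach concrete: the formal fairness criteria at issue are typically simple per-group statistics. Below is a minimal, hypothetical sketch of two common ones: demographic parity (equal rates of being flagged high-risk) and false-positive-rate parity (an equal false-alarm burden on people who were not rearrested). The data and group labels are invented for illustration; the chapter's point is precisely that optimizing such narrow measures leaves the deeper methodological questions untouched.

```python
# Two per-group fairness statistics on hypothetical risk-tool output.
# preds: 1 = flagged high-risk; outcomes: 1 = later rearrested.

def flag_rate(preds):
    """Demographic parity compares this rate across groups."""
    return sum(preds) / len(preds)

def false_positive_rate(preds, outcomes):
    """Share of non-rearrested people wrongly flagged high-risk."""
    negatives = [p for p, y in zip(preds, outcomes) if y == 0]
    return sum(negatives) / len(negatives)

groups = {
    "group A": ([1, 1, 0, 1, 0, 0], [1, 0, 0, 1, 0, 0]),
    "group B": ([0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 1, 0]),
}

for name, (preds, outcomes) in groups.items():
    print(f"{name}: flag rate {flag_rate(preds):.2f}, "
          f"FPR {false_positive_rate(preds, outcomes):.2f}")
```

When base rates differ across groups, equalizing one of these statistics generally unbalances the other, which is the sense in which the literature speaks of trade-offs between criteria.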


Author(s):  
Alessandro Blasimme
Effy Vayena

This chapter explores ethical issues raised by the use of artificial intelligence (AI) in the domain of biomedical research, healthcare provision, and public health. The litany of ethical challenges that AI in medicine raises cannot be addressed sufficiently by current regulatory and ethical frameworks. The chapter then advances the systemic oversight approach as a governance blueprint, which is based on six principles offering guidance as to the desirable features of oversight structures and processes in the domain of data-intensive biomedicine: adaptivity, flexibility, inclusiveness, reflexivity, responsiveness, and monitoring (AFIRRM). In the research domain, ethical review committees will have to incorporate reflexive assessment of the scientific and social merits of AI-driven research and, as a consequence, will have to open their ranks to new professional figures such as social scientists. In the domain of patient care, clinical validation is a crucial issue. Hospitals could equip themselves with “clinical AI oversight bodies” charged with the task of advising clinical administrators. Meanwhile, in the public health sphere, the new level of granularity enabled by AI in disease surveillance or health promotion will have to be negotiated at the level of targeted communities.


Author(s):  
Bryant Walker Smith

This chapter highlights key ethical issues in the use of artificial intelligence in transport, using automated driving as an example. These issues include the tension between technological solutions and policy solutions; the consequences of safety expectations; the complex choice between human authority and computer authority; and power dynamics among individuals, governments, and companies. In 2017 and 2018, the U.S. Congress considered automated-driving legislation that was generally supported by many of the larger automated-driving developers. However, this legislation failed to pass because of a lack of trust in technologies and institutions. Trustworthiness, or whether that trust is actually deserved, is much more of an ethical question. Automated vehicles will not be driven by individuals or even by computers; they will be driven by companies acting through their human and machine agents. An essential issue for this field—and for artificial intelligence generally—is how the companies that develop and deploy these technologies should earn people’s trust.


Author(s):  
Andrea Renda

This chapter assesses Europe’s efforts in developing a full-fledged strategy on the human and ethical implications of artificial intelligence (AI). The strong focus on ethics in the European Union’s AI strategy should be seen in the context of an overall strategy that aims at protecting citizens and civil society from abuses of digital technology, but also as part of a competitiveness-oriented strategy aimed at raising the standards for access to Europe’s wealthy Single Market. In this context, one of the most distinctive steps in the European Union’s strategy was the creation of an independent High-Level Expert Group on AI (AI HLEG), accompanied by the launch of an AI Alliance, which quickly attracted several hundred participants. The AI HLEG, a multistakeholder group including fifty-two experts, was tasked with the definition of Ethics Guidelines as well as with the formulation of “Policy and Investment Recommendations.” With the advice of the AI HLEG, the European Commission put forward its Ethics Guidelines for Trustworthy AI, which are now paving the way for a comprehensive, risk-based policy framework.


Author(s):  
Nagla Rizk

This chapter looks at the challenges, opportunities, and tensions facing the equitable development of artificial intelligence (AI) in the MENA region in the aftermath of the Arab Spring. While diverse in their natural and human resource endowments, the countries of the region share a commonality in the predominance of a youthful population amid complex political and economic contexts. Rampant unemployment—especially among a growing young population—together with informality and with gender and digital inequalities, will likely shape the impact of AI technologies, especially in the region’s labor-abundant, resource-poor countries. The chapter then analyzes issues related to data, the legislative environment, infrastructure, and human resources as key inputs to AI technologies, which in their current state may exacerbate existing inequalities. Ultimately, the promise of AI technologies for inclusion and for mitigating inequalities lies in harnessing ground-up youth entrepreneurship and innovation initiatives driven by data and AI, with a few hopeful signs coming from national policies.


Author(s):  
Chinmayi Arun

This chapter details how AI affects, and will continue to affect, the Global South. The term “South” has a history connected with the “Third World” and has referred to countries that share a postcolonial history and certain development goals. However, scholars have expanded and refined it to include different kinds of marginal, disenfranchised populations, such that the South is now a plural concept—there are Souths. The AI-related risks for Southern populations include concerns of discrimination, bias, oppression, exclusion, and bad design. These can be exacerbated in the context of vulnerable populations, especially those without access to human rights law or institutional remedies. The chapter then outlines these risks as well as the international human rights law that is applicable. It argues that a human rights–centric, inclusive, empowering, context-driven approach is necessary.

