Philosophy & Technology
Latest Publications


TOTAL DOCUMENTS: 477 (last five years: 147)

H-INDEX: 24 (last five years: 5)

Published by Springer-Verlag

ISSN: 2210-5441 (electronic), 2210-5433 (print)

Author(s): Katia Schwerzmann

Abstract: In this article, I show why it is necessary to abolish the use of predictive algorithms in the US criminal justice system at sentencing. After presenting how these algorithms function and the context in which they emerged, I offer three arguments for why their abolition is imperative. First, I show that sentencing based on predictive algorithms rewrites the temporality of the judged individual, flattening their life into a present inescapably doomed by its past. Second, I demonstrate that recursive processes, comprising predictive algorithms and the decisions based on their predictions, systematically suppress outliers and progressively transform reality to match predictions. Third and finally, I show that decisions made on the basis of predictive algorithms actively perform a biopolitical understanding of justice as the management and modulation of risk. In such a framework, justice becomes a means of maintaining a perverse social homeostasis that systematically exposes disenfranchised Black and Brown populations to risk.
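The second argument describes a feedback loop that is easy to state but worth seeing mechanically. The toy simulation below is a minimal sketch under invented assumptions (two groups with identical true behavior, a historical disparity only in what was recorded, and a "model" that simply reads risk off the record count); nothing in it comes from the article itself.

```python
import random

random.seed(0)

POP, ROUNDS = 1000, 6
TRUE_RATE = 0.2  # assumed: identical true incident rate for everyone

# Two groups with identical behavior but unequal *recorded* histories.
people = [{"group": random.choice("AB"), "recorded": 0} for _ in range(POP)]
for p in people:
    hist_rate = 0.3 if p["group"] == "A" else 0.1  # assumed recording disparity
    p["recorded"] = sum(random.random() < hist_rate for _ in range(3))

for rnd in range(ROUNDS):
    # "Model": predicted risk is just the recorded history, a stand-in for
    # any learner fit to past records. The top ~20% are flagged high risk.
    cutoff = sorted(p["recorded"] for p in people)[int(POP * 0.8)]
    for p in people:
        p["flagged"] = p["recorded"] >= cutoff
    for p in people:
        if random.random() < TRUE_RATE:  # true incidents are group-blind
            # Decision feedback: flagged people are watched more closely,
            # so identical behavior is recorded far more often.
            if random.random() < (0.9 if p["flagged"] else 0.1):
                p["recorded"] += 1
    flags = [p for p in people if p["flagged"]]
    share_a = sum(p["group"] == "A" for p in flags) / len(flags)
    print(f"round {rnd}: group A share of high-risk flags = {share_a:.2f}")
```

Run over a few rounds, the flagged pool converges on the group with the longer recorded history even though true incident rates never differ: the recursion suppresses outliers and bends the recorded "reality" toward its own predictions, which is the dynamic the abstract points to.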


Author(s): Siri Beerends, Ciano Aydin

Abstract: Essentialists understand authenticity as an inherent quality of a person, object, artifact, or place, whereas constructionists consider authenticity a social creation without any pre-given essence, factuality, or reality. In this paper, we move beyond the essentialist-constructionist dichotomy. Rather than asking whether authenticity can be found or must be constructed, we take up the idea that authenticity is an interactive, culturally informed process of negotiation. In addition to essentialist and constructionist approaches, we discuss a third, less well-known approach that cannot be reduced to either of the other two. This approach celebrates the authenticity of inauthenticity by amplifying the made. We argue that the value of (in)authenticity lies not in choosing one of these approaches, but in the degree to which the process of negotiating authenticity enables a critical formation of selves and societies. Authenticity is often invoked as a method of social control or a mark of power relations: once something is defined as authentic, it is no longer questioned. Emerging technologies, especially data-driven technologies, have the capacity to conceal these power relations, propel a shift in power, and dominate authentication processes. This raises the question of how processes of authentication can contribute to a critical formation of selves and societies against the backdrop of emerging technologies. We argue for an interactionist approach to authenticity and discuss the importance of creating space in authentication processes that are increasingly influenced by technology as an invisible actor.


Author(s): Cristiano Cordeiro Cruz

Abstract: Decolonial theory holds that Western modernity keeps imposing itself through a triple, mutually reinforcing and shaping imprisonment: the coloniality of power, the coloniality of knowledge, and the coloniality of being. Technical design plays an essential role in either maintaining or overcoming coloniality. This article presents two main approaches to decolonizing technical design. The first comprises Yuk Hui's and Ahmed Ansari's proposals, which, by revisiting or recovering the different histories and philosophies of technology produced by humankind, aim to decolonize the minds of philosophers and engineers/architects/designers as a precondition for decolonial designs; I call these top-down approaches. The second comprises technical design initiatives that, developed alongside marginalized/subaltern people, aim to co-construct decolonial sociotechnical solutions through a committed, decolonizing, and careful dialogue of knowledges; I call these bottom-up approaches. The article's second half then derives ontological, epistemological, and political consequences from the conjunction of top-down and bottom-up approaches. These consequences challenge some established, or not yet entirely overcome, understandings in the philosophy of technology (PT) and are thereby meant to represent steps in PT's decolonization. Although both top-down and bottom-up approaches are considered, the article's main contributions concern (bottom-up) decolonial technical design practices, whose methodologies and outcomes are important case studies for PT and whose practitioners (i.e., decolonial designers) can serve as inspiring examples for philosophers who want to decolonize/enlarge PT or make it decolonial (that is, a way of fostering decoloniality).


Author(s): Ulrik Franke

Abstract: Modern society makes extensive use of automated algorithmic decisions, fueled by advances in artificial intelligence. Since these systems are not perfect, questions about fairness are increasingly investigated in the literature. In particular, many authors take a Rawlsian approach to algorithmic fairness. This article identifies some complications with this approach: under which circumstances can Rawls's original position reasonably be applied to algorithmic fairness decisions? First, it is argued that there are important differences between Rawls's original position and a parallel algorithmic fairness original position with respect to risk attitudes. Second, it is argued that applying Rawls's original position to algorithmic fairness faces a boundary problem in defining the relevant stakeholders. Third, it is observed that the definition of the least advantaged, necessary for applying the difference principle, requires some attention in the context of algorithmic fairness. Finally, it is argued that appropriate deliberation in algorithmic fairness contexts often requires more knowledge about probabilities than the Rawlsian original position allows. Provided that these complications are duly considered, the thought experiment of the Rawlsian original position can be useful in algorithmic fairness decisions.
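The first and fourth complications both turn on risk attitudes and on how much knowledge of probabilities the parties are allowed, which a small worked example can make concrete. The sketch below contrasts Rawlsian maximin choice with expected-utility choice over the same options; the policies, payoffs, and probabilities are invented for illustration, not drawn from the article.

```python
# Hypothetical policies: welfare levels for three social positions, plus
# the chance of ending up in each position (information the veil of
# ignorance deliberately withholds from the parties).
policies = {
    "A": {"payoffs": [1, 50, 50], "probs": [0.1, 0.45, 0.45]},
    "B": {"payoffs": [10, 20, 30], "probs": [0.1, 0.45, 0.45]},
}

def worst_off(name):
    # Rawlsian maximin: judge a policy by its least advantaged position.
    return min(policies[name]["payoffs"])

def expected(name):
    # Expected utility: weigh every position by its probability.
    return sum(p * u for p, u in zip(policies[name]["probs"],
                                     policies[name]["payoffs"]))

print("maximin picks:", max(policies, key=worst_off))          # B (10 > 1)
print("expected utility picks:", max(policies, key=expected))  # A (45.1 > 23.5)
```

Maximin prefers B because its worst position (10) beats A's (1), while an expected-utility chooser who knows the probabilities prefers A. Which decision rule the parties should use, and whether they may consult probabilities at all, is exactly what the complications above put in question for algorithmic fairness.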


Author(s): Emanuele Ratti, Mark Graves

Abstract: In the past few years, the ethical ramifications of AI technologies (in particular, data science) have been at the center of intense debate. Considerable attention has been devoted to understanding how a morally responsible practice of data science can be promoted and which values should shape it. In this context, ethics and moral responsibility have mainly been conceptualized as compliance with widely shared principles. However, several scholars have highlighted the limitations of such a principled approach. Drawing on microethics and the virtue-theory tradition, we formulate in this paper a different approach to ethics in data science, based on a different conception of “being ethical” and, ultimately, of what it means to promote a morally responsible data science. First, we develop the idea that, rather than mere compliance, ethical decision-making consists in exercising certain moral abilities (e.g., virtues), which are cultivated by practicing them in the data science process. One aspect of virtue development that we discuss is moral attention: the ability of data scientists to identify the ethical relevance of their own technical decisions in data science activities. Next, elaborating on the capability approach, we define a technical act as ethically relevant when it impacts one or more of the basic human capabilities of data subjects. Rather than “applying ethics” (which can be mindless), data scientists should therefore cultivate ethics as a form of reflection on how technical choices and ethical impacts shape one another. Finally, we show how this microethical framework works concretely by dissecting the ethical dimension of the technical procedures involved in the understanding and preparation of electronic health records.
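As a concrete illustration of the kind of ethically relevant technical act the authors have in mind in EHR preparation, consider the routine step of dropping records with missing values. The sketch below is entirely hypothetical (invented records and field names, not the authors' case material), but it shows how moral attention can be exercised as a small, checkable habit inside the pipeline.

```python
# Invented mini-dataset; the insured/uninsured split and the blood-pressure
# field are illustrative assumptions, not data from the article.
records = [
    {"id": 1, "insured": True,  "bp": 120},
    {"id": 2, "insured": False, "bp": None},
    {"id": 3, "insured": False, "bp": None},
    {"id": 4, "insured": True,  "bp": 135},
    {"id": 5, "insured": False, "bp": 140},
]

# The seemingly neutral technical step: keep only complete records.
kept = [r for r in records if r["bp"] is not None]

# Moral attention as a concrete habit: before accepting the filter, ask
# who it silently excludes from every downstream model and decision.
for insured in (True, False):
    group = [r for r in records if r["insured"] == insured]
    dropped = 1 - len([r for r in group if r in kept]) / len(group)
    print(f"insured={insured}: dropped {dropped:.0%}")
```

In this toy case the filter removes two-thirds of uninsured patients and none of the insured, so a choice that looks purely technical quietly shapes whose health capabilities the downstream model can even register.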


Author(s): Mariarosaria Taddeo, David McNeish, Alexander Blanchard, Elizabeth Edgar

Abstract: Defence agencies across the globe identify artificial intelligence (AI) as a key technology to maintain an edge over adversaries. As a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Unfortunately, they remain unmatched by efforts to define ethical frameworks to guide the use of AI in the defence domain. This article provides one such framework. It identifies five principles (justified and overridable uses; just and transparent systems and processes; human moral responsibility; meaningful human control; and reliable AI systems) and related recommendations to foster ethically sound uses of AI for national defence purposes.


Author(s): Andrew McStay

Abstract: This paper assesses leading Japanese philosophical thought since the onset of Japan's modernity, that is, from the Meiji Restoration (1868) onwards. It argues that lessons of global value for AI ethics can be found by examining leading Japanese philosophers of modernity and ethics (Yukichi Fukuzawa, Nishida Kitarō, Nishi Amane, and Watsuji Tetsurō), each of whom engaged closely with Western philosophical traditions. Turning to these philosophers allows us to move beyond broadly individualistic and Western-oriented ethical debates about emerging AI-related technologies by introducing notions of community, wholeness, sincerity, and heart. With reference to AI that profiles, judges, learns from, and interacts with human emotion (emotional AI), this paper contends (a) that Japan itself may make better internal use of its historical indigenous ethical thought, especially as it applies to questions of data and relationships with technology, and (b) that Western and global ethical discussions of emerging technologies will find valuable insights in Japanese thought. The paper concludes by distilling from the Japanese philosophers of modernity four ethical suggestions, or “spices,” for emerging technological contexts, aimed at Japan's national AI policies and at international fora such as standards development and global AI ethics policymaking.

