Fairness, accountability and transparency: notes on algorithmic decision-making in criminal justice

2019 ◽  
Vol 15 (2) ◽  
pp. 126-139 ◽  
Author(s):  
Vincent Chiao

Over the last few years, legal scholars, policy-makers, activists and others have generated a vast and rapidly expanding literature concerning the ethical ramifications of using artificial intelligence, machine learning, big data and predictive software in criminal justice contexts. These concerns can be clustered under the headings of fairness, accountability and transparency. First, can we trust technology to be fair, especially given that the data on which the technology is based are biased in various ways? Second, whom can we blame if the technology goes wrong, as it inevitably will on occasion? Finally, does it matter if we do not know how an algorithm works or, relatedly, cannot understand how it reached its decision? I argue that, while these are serious concerns, they are not irresolvable. More importantly, the very same concerns of fairness, accountability and transparency apply, with even greater urgency, to existing modes of decision-making in criminal justice. The question, hence, is comparative: can algorithmic modes of decision-making improve upon the status quo in criminal justice? There is unlikely to be a categorical answer to this question, although there are some reasons for cautious optimism.

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Pooya Tabesh

Purpose: While it is evident that the introduction of machine learning and the availability of big data have revolutionized various organizational operations and processes, existing academic and practitioner research within the decision process literature has mostly ignored the nuances of these influences on human decision-making. Building on existing research in this area, this paper aims to define these concepts from a decision-making perspective and to elaborate on the influences of these emerging technologies on human analytical and intuitive decision-making processes.
Design/methodology/approach: The authors first provide a holistic understanding of important drivers of digital transformation. They then conceptualize the impact that analytics tools built on artificial intelligence (AI) and big data have on intuitive and analytical human decision processes in organizations.
Findings: The authors discuss similarities and differences between machine learning and two human decision processes, namely analysis and intuition. While it is difficult to draw firm conclusions about the future of machine learning, human decision-makers seem likely to continue to monopolize the majority of intuitive decision tasks, which will help them keep the upper hand (vis-à-vis machines), at least in the near future.
Research limitations/implications: The work contributes to research on rational (analytical) and intuitive processes of decision-making at the individual, group and organization levels by theorizing about the way these processes are influenced by advanced AI algorithms such as machine learning.
Practical implications: Decisions are building blocks of organizational success. A better understanding of the way human decision processes can be affected by advanced technologies will therefore prepare managers to use these technologies well and make better decisions. By clarifying the boundaries and overlaps among concepts such as AI, machine learning and big data, the authors contribute to their successful adoption by business practitioners.
Social implications: The work suggests that human decision-makers will not be replaced by machines if they continue to invest in what they do best: critical thinking, intuitive analysis and creative problem-solving.
Originality/value: The work elaborates on important drivers of digital transformation from a decision-making perspective and discusses their practical implications for managers.


2020 ◽  
pp. 002224292095734
Author(s):  
Chiara Longoni ◽  
Luca Cian

Rapid development and adoption of AI, machine learning, and natural language processing applications challenge managers and policy makers to harness these transformative technologies. In this context, the authors provide evidence of a novel "word-of-machine" effect: the phenomenon by which utilitarian/hedonic attribute trade-offs determine preference for, or resistance to, AI-based recommendations compared with traditional word of mouth, or human-based recommendations. The word-of-machine effect stems from a lay belief that AI recommenders are more competent than human recommenders in the utilitarian realm and less competent than human recommenders in the hedonic realm. As a consequence, the importance or salience of utilitarian attributes determines preference for AI recommenders over human ones, and the importance or salience of hedonic attributes determines resistance to AI recommenders over human ones (Studies 1–4). The word-of-machine effect is robust to attribute complexity, number of options considered, and transaction costs. The word-of-machine effect reverses for utilitarian goals if a recommendation needs to be matched to a person's unique preferences (Study 5) and is eliminated in the case of human–AI hybrid decision making (i.e., augmented rather than artificial intelligence; Study 6). An intervention based on the consider-the-opposite protocol attenuates the word-of-machine effect (Studies 7a–b).


2021 ◽  
Vol 27 (4) ◽  
pp. 146045822110523
Author(s):  
Nicholas RJ Möllmann ◽  
Milad Mirbabaie ◽  
Stefan Stieglitz

The application of artificial intelligence (AI) not only yields advantages for healthcare but also raises several ethical questions. Extant research on the ethical considerations of AI in digital health is quite sparse, and a holistic overview is lacking. A systematic literature review across 853 peer-reviewed journals and conferences yielded 50 relevant articles, categorized under five major ethical principles: beneficence, non-maleficence, autonomy, justice, and explicability. The review portrays the ethical landscape of AI in digital health and provides a snapshot to guide future development. The status quo highlights areas where empirical research is required but still scarce. Less explored areas with remaining ethical questions are identified, and scholars' efforts are guided by an overview of the ethical principles addressed and the intensity of existing studies, including their correlations. Practitioners gain an understanding of the novel questions AI raises, which should eventually lead to properly regulated implementations, and further comprehend that society is on its way from supporting technologies to autonomous decision-making systems.


2019 ◽  
Vol 45 (8) ◽  
pp. 556-558 ◽  
Author(s):  
Ezio Di Nucci

I analyse an argument, recently put forward by Rosalind McDougall in the Journal of Medical Ethics, according to which medical artificial intelligence (AI) represents a threat to patient autonomy. The argument takes the case of IBM Watson for Oncology to argue that such technologies risk disregarding the individual values and wishes of patients. I find three problems with this argument: (1) it confuses AI with machine learning; (2) it misses machine learning's potential for personalised medicine through big data; (3) it fails to distinguish between evidence-based advice and decision-making within healthcare. I conclude that how much and which tasks we should delegate to machine learning and other technologies within healthcare and beyond is indeed a crucial question of our time, but that in order to answer it, we must be careful to analyse and properly distinguish between the different systems and the different delegated tasks.


2020 ◽  
Vol 114 ◽  
pp. 242-245
Author(s):  
Jootaek Lee

The term artificial intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, believed to have originated with Kurt Gödel's unprovable computational statements in 1931, is now often referred to as deep learning or machine learning. AI is defined as a computer machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and become effective with big data capturing the present and the past, while still necessarily carrying human biases into models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots efficiently repeat the same labor-intensive procedures in factories and can analyze historic and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum: augmented intelligence relating to prediction, autonomous intelligence relating to decision-making, automated intelligence for labor robots, and assisted intelligence for data analysis.


Author(s):  
Florian A. Huber ◽  
Roman Guggenberger

Recent investigations have focused on the clinical application of artificial intelligence (AI) for tasks specifically addressing the musculoskeletal imaging routine. Several AI applications have been dedicated to optimizing the radiology value chain in spine imaging, independent of modality or specific application. This review aims to summarize the status quo and future perspectives regarding the utilization of AI for spine imaging. First, the basics of AI concepts are clarified. Second, the different tasks and use cases for AI applications in spine imaging are discussed and illustrated by examples. Finally, the authors of this review present their personal perception of AI in daily imaging and discuss the future opportunities and challenges that come with AI-based solutions.


AI & Society ◽  
2021 ◽  
Author(s):  
Milad Mirbabaie ◽  
Lennart Hofeditz ◽  
Nicholas R. J. Frick ◽  
Stefan Stieglitz

The application of artificial intelligence (AI) in hospitals yields many advantages but also confronts healthcare with ethical questions and challenges. While various disciplines have conducted specific research on the ethical considerations of AI in hospitals, the literature still lacks a holistic overview. By applying a systematic discourse approach, complemented by expert interviews with healthcare specialists, we identified the status quo of interdisciplinary research in academia on the ethical considerations and dimensions of AI in hospitals. We found 15 fundamental manuscripts by constructing a citation network for the ethical discourse, and we extracted actionable principles and their relationships. We provide an agenda to guide academia, framed under the principles of biomedical ethics. We offer an understanding of the current ethical discourse on AI in clinical environments, identify where further research is most pressingly needed, and discuss additional research questions that should be addressed. We also guide practitioners in acknowledging AI-related benefits in hospitals and in understanding the related ethical concerns.


Proceedings ◽  
2021 ◽  
Vol 74 (1) ◽  
pp. 24
Author(s):  
Eduard Alexandru Stoica ◽  
Daria Maria Sitea

Society today is profoundly changed by technology, velocity and productivity. While individuals are not yet prepared for holographic connection with banks or financial institutions, other innovative technologies have already been adopted. Lately, a new world has been launched, personalized and adapted to reality. It has emerged and started to govern almost all daily activities, resting on five key technological foundations: machine to machine (M2M), the internet of things (IoT), big data, machine learning and artificial intelligence (AI). Competitive innovations are now on the market, helping to connect investors and borrowers, notably through crowdfunding and peer-to-peer lending. Blockchain technology is now enjoying great popularity, and a great part of the focus of this research paper is therefore on Elrond. The outcomes highlight the relevance of technology in digital finance.


2021 ◽  
Vol 51 (1) ◽  
pp. 139-164
Author(s):  
Jakob Raffn ◽  
Frederik Lassen

Here we introduce the board game Politics of Nature, or PoN as it is now known. Inspired by the work of Bruno Latour, PoN offers an alternative take on co-existence by implementing a flat political ontology in a gamified meeting protocol. PoN does not suggest that humans have no special abilities, only that humans, at the outset, are bestowed with no more rights than other kinds of beings. Designed to enable people from all walks of life to playfully unpack and resolve controversies, PoN provides a space where beings can have their existence renegotiated. The aim of PoN is to play as a team to explore and decide on potentially good common worlds in which more indispensable beings can exist than if the status quo were continued. By playing PoN iteratively through rounds, each having four stages, the players gradually construct PoN, a planet mirroring 'real worlds'. The four stages provide a novel combination of identification, representation, meditation, prioritization, mapping, individual and group ideation, proposal formulation, and decision-making; after each round, the players are asked to challenge and change PoN to fit their requirements. What follows is taken directly from the manual.

