Recidivism Prediction
Recently Published Documents


TOTAL DOCUMENTS: 35 (FIVE YEARS: 11)
H-INDEX: 11 (FIVE YEARS: 2)

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Felix G. Rebitschek ◽  
Gerd Gigerenzer ◽  
Gert G. Wagner

This study provides the first representative analysis of error estimations and willingness to accept errors in a Western country (Germany) with regard to algorithmic decision-making (ADM) systems. We examine people's expectations about the accuracy of algorithms that predict credit default, recidivism of an offender, suitability of a job applicant, and health behavior. We also ask whether expectations about algorithm errors vary between these domains and how they differ from expectations about errors made by human experts. In a nationwide representative study (N = 3086), we find that most respondents underestimated the actual errors made by algorithms and were willing to accept even fewer errors than they estimated. Error estimates and error acceptance did not differ consistently between predictions made by algorithms and those made by human experts, but people's living conditions (e.g. unemployment, household income) affected domain-specific acceptance of misses and false alarms (job suitability, credit default). We conclude that people have unwarranted expectations about the performance of ADM systems and evaluate errors in terms of potential personal consequences. Given the general public's low willingness to accept errors, we further conclude that acceptance of ADM appears to be conditional on strict accuracy requirements.
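To make the error terminology in this abstract concrete, here is a minimal sketch of how the miss rate and false-alarm rate of a binary predictor are typically computed. The labels and predictions below are made up for illustration and do not come from the study.

```python
# Illustrative only: miss rate and false-alarm rate for binary predictions.
# 1 = event occurred (e.g. reoffending); the data here is hypothetical.

def error_rates(y_true, y_pred):
    """Return (miss rate, false-alarm rate) for 0/1 outcomes."""
    misses = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    false_alarms = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    positives = sum(y_true)
    negatives = len(y_true) - positives
    return misses / positives, false_alarms / negatives

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical actual outcomes
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]   # hypothetical algorithmic predictions
miss_rate, fa_rate = error_rates(y_true, y_pred)
print(f"miss rate: {miss_rate:.2f}, false-alarm rate: {fa_rate:.2f}")
```

Respondents in the study were effectively asked to estimate such rates for real systems, then to state which rates they would accept.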


2021 ◽  
Vol 4 ◽  
Author(s):  
Angelika Adensamer ◽  
Lukas Daniel Klausner

Digitisation, automation, and datafication permeate policing and justice more and more each year, from predictive policing methods through recidivism prediction to automated biometric identification at the border. The sociotechnical issues surrounding the use of such systems raise questions and reveal problems, both old and new. Our article reviews contemporary issues surrounding automation in policing and the legal system, identifies common issues and themes across a range of examples, introduces the distinction between human "retail bias" and algorithmic "wholesale bias", and argues for reframing the debate to focus on workers' rights and organisational responsibility as well as fundamental rights and the right to an effective remedy.


Author(s):  
William T. Miller ◽  
Christina A. Campbell ◽  
Jordan Papp ◽  
Ebony Ruhland

Scholars have raised concerns about the potential for racial bias in risk assessments resulting from the inclusion of static factors, such as criminal history. The purpose of this study was to examine the extent to which static factors add incremental validity beyond the dynamic factors in criminogenic risk assessments. The study examined the Youth Level of Service/Case Management Inventory (YLS/CMI) in a sample of 1,270 youth offenders from a medium-sized Midwestern county between June 2004 and November 2013. Logistic regression was used to determine the predictive validity of the YLS/CMI and the individual contributions of the static and dynamic domains of the assessment. Results indicated that the static domain differentially predicted recidivism for Black and White youth: the static domain was a significant predictor of recidivism for White youth, but not for Black youth. The dynamic domain significantly predicted recidivism for both Black and White offenders, and static risk factors improved the prediction of recidivism for White youth, but not for Black youth.
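The incremental-validity question here is a standard nested-model comparison. Below is a minimal sketch, assuming a DataFrame with hypothetical columns 'recid' (0/1 reoffending outcome), 'dynamic' (summed dynamic YLS/CMI domain score), and 'static' (criminal-history domain score); it is not the authors' code, and the data is simulated.

```python
# Sketch: does the static domain add predictive validity beyond the dynamic
# domain? Compare nested logistic regressions with a likelihood-ratio test.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "recid":   rng.binomial(1, 0.4, n),    # hypothetical 0/1 outcome
    "dynamic": rng.normal(10, 3, n),       # hypothetical dynamic domain score
    "static":  rng.normal(3, 1.5, n),      # hypothetical static domain score
})

# Baseline model: dynamic domain only
base = sm.Logit(df["recid"], sm.add_constant(df[["dynamic"]])).fit(disp=0)
# Full model: dynamic + static domains
full = sm.Logit(df["recid"], sm.add_constant(df[["dynamic", "static"]])).fit(disp=0)

# Likelihood-ratio test for the added static domain (1 extra parameter)
lr = 2 * (full.llf - base.llf)
p = stats.chi2.sf(lr, df=1)
print(f"LR = {lr:.2f}, p = {p:.3f}")
```

To mirror the study's differential-prediction finding, such models would be fit separately within the Black and White subsamples and the static domain's contribution compared across groups.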


2020 ◽  
pp. 1-21
Author(s):  
Justin B. Biddle

Recent scholarship in philosophy of science and technology has shown that scientific and technological decision making is laden with values, including values of a social, political, and/or ethical character. This paper examines the role of value judgments in the design of machine-learning (ML) systems generally and in recidivism-prediction algorithms specifically. Drawing on work on inductive and epistemic risk, the paper argues that ML systems are value laden in ways similar to human decision making, because the development and design of ML systems require human decisions that involve tradeoffs reflecting values. In many cases, these decisions have significant, and in some cases disparate, downstream impacts on human lives. After examining an influential court decision regarding the use of proprietary recidivism-prediction algorithms in criminal sentencing, Wisconsin v. Loomis, the paper offers three recommendations for the use of ML in penal systems.
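One concrete example of the value-laden design decisions the paper discusses is the choice of a classification threshold, which implicitly weighs the harm of a false positive (e.g. detaining someone who would not reoffend) against that of a false negative. The sketch below uses hypothetical scores and costs to show how different value judgments yield different "optimal" thresholds.

```python
# Sketch: the "best" decision threshold depends on how one values false
# positives relative to false negatives. All data and costs are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.uniform(0, 1, 1000)      # hypothetical risk scores
labels = rng.binomial(1, scores)      # outcomes correlated with the scores

def expected_cost(threshold, c_fp, c_fn):
    pred = scores >= threshold
    fp = np.sum(pred & (labels == 0))   # false positives at this threshold
    fn = np.sum(~pred & (labels == 1))  # false negatives at this threshold
    return c_fp * fp + c_fn * fn

# Two different value judgments about relative harms
for c_fp, c_fn in [(1, 1), (1, 5)]:
    thresholds = np.linspace(0.05, 0.95, 19)
    best = min(thresholds, key=lambda t: expected_cost(t, c_fp, c_fn))
    print(f"cost(FP)={c_fp}, cost(FN)={c_fn} -> best threshold {best:.2f}")
```

No purely technical criterion picks between the two cost settings; that choice is exactly the kind of ethically significant tradeoff the paper identifies.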


2020 ◽  
Vol 34 (10) ◽  
pp. 13839-13840
Author(s):  
Aria Khademi ◽  
Vasant Honavar

ProPublica's analysis of recidivism predictions produced by the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) software tool showed that the predictions were racially biased against African American defendants. We analyze the COMPAS data using a causal reformulation of the underlying algorithmic fairness problem. Specifically, we assess whether COMPAS exhibits racial bias against African American defendants using FACT, a recently introduced causality-grounded measure of algorithmic fairness. We use the Neyman-Rubin potential outcomes framework for causal inference from observational data to estimate FACT from the COMPAS data. Our analysis offers strong evidence that COMPAS exhibits racial bias against African American defendants. We further show that the FACT estimates from the COMPAS data are robust in the presence of unmeasured confounding.
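A heavily simplified sketch in the spirit of this potential-outcomes analysis: treat the protected attribute as the "treatment" and ask whether matched individuals with similar covariates but different race receive systematically different risk predictions. This is an illustration of the matching idea, not the authors' FACT estimator, and all columns and data are hypothetical.

```python
# Sketch: average prediction gap between race groups after nearest-neighbour
# matching on covariates (a crude stand-in for a potential-outcomes contrast).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "race":   rng.integers(0, 2, n),   # 1 = group of interest (hypothetical)
    "priors": rng.poisson(2, n),       # hypothetical prior-offence count
    "age":    rng.integers(18, 60, n),
    "score":  rng.uniform(0, 1, n),    # hypothetical algorithmic risk score
})

treated = df[df.race == 1].reset_index(drop=True)
control = df[df.race == 0].reset_index(drop=True)

# Match each treated unit to its closest control on (priors, age)
X_t = treated[["priors", "age"]].to_numpy(float)
X_c = control[["priors", "age"]].to_numpy(float)
gaps = []
for i in range(len(treated)):
    j = np.argmin(((X_c - X_t[i]) ** 2).sum(axis=1))  # nearest control unit
    gaps.append(treated.score[i] - control.score[j])

print(f"matched average prediction gap: {np.mean(gaps):.3f}")
```

A gap far from zero among matched pairs would suggest the predictions depend on race beyond what the covariates explain; the paper's FACT measure formalizes this intuition within the Neyman-Rubin framework.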


2020 ◽  
Vol 2 (1) ◽  
Author(s):  
Cynthia Rudin ◽  
Caroline Wang ◽  
Beau Coker
