Efficiency and fairness in recurring data-driven risk assessments of violent recidivism

Author(s):
Marzieh Karimi-Haghighi
Carlos Castillo


2007
Vol 34 (3)
pp. 297-313
Author(s):
Grant T. Harris
Marnie E. Rice

Two studies herein address whether age, the passage of time since the first offense, time spent incarcerated, and time spent offense-free in the community are empirically justified postevaluation adjustments in forensic violence risk assessment. Using three non-overlapping samples of violent offenders, the first study examined whether any of three variables (time elapsed since the first offense, time spent incarcerated, and age at release) was related to violent recidivism or made an incremental contribution to the prediction of violent recidivism after age at first offense was considered. Time since first offense and time spent incarcerated were uninformative. Age at release predicted violent recidivism, but not as well as age at first offense, and it afforded no independent incremental validity. For sex offenders, age at first offense improved the prediction of violent and sexual recidivism. In the second study, time spent offense-free while at risk was related to violent recidivism, such that an actuarial adjustment to the Violence Risk Appraisal Guide could be derived. The results support the use of adjustments (based on the passage of time) to actuarial scores, but only adjustments that are themselves actuarial.
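
A minimal sketch of the incremental-validity check described above, assuming Python with statsmodels and SciPy; the data and column names are synthetic stand-ins, not the authors' samples:

```python
# Sketch: does age at release add predictive value for violent recidivism
# once age at first offense is already in the model? (Hypothetical data.)
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age_first_offense": rng.uniform(15, 40, 500),
    "age_at_release": rng.uniform(20, 60, 500),
})
# Simulate an outcome driven mainly by age at first offense.
p = 1 / (1 + np.exp(-(2.0 - 0.1 * df["age_first_offense"])))
df["violent_recidivism"] = rng.binomial(1, p)

base = sm.Logit(df["violent_recidivism"],
                sm.add_constant(df[["age_first_offense"]])).fit(disp=0)
full = sm.Logit(df["violent_recidivism"],
                sm.add_constant(df[["age_first_offense",
                                    "age_at_release"]])).fit(disp=0)

# Likelihood-ratio test of the nested models: a large p-value means the
# added predictor offers no incremental validity.
lr = 2 * (full.llf - base.llf)
print(f"LR = {lr:.2f}, p = {stats.chi2.sf(lr, df=1):.3f}")
```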


2019
Vol 46 (7)
pp. 939-960
Author(s):
Benny Salo
Toni Laaksonen
Pekka Santtila

We estimated the predictive power of the dynamic items in the Finnish Risk and Needs Assessment Form (Riski- ja tarvearvio [RITA]), assessed by caseworkers, for predicting recidivism. These 52 items were compared to static predictors including crime(s) committed, prison history, and age. We used two machine learning methods (elastic net and random forest) for this purpose and compared them with logistic regression. Participants were 746 men who had, and 746 who had not, reoffended during matched follow-up periods of 0.5 to 5.8 years. Both the RITA items and the static predictors predicted general and violent recidivism well (area under the curve [AUC] = .74-.78), but combining them increased discrimination only slightly over the static predictors alone (ΔAUC = .01-.03). Calibration was good for all models. We argue that the results show strong potential for the RITA items, but that development is best focused on improving usability for identifying treatment targets and for updating risk assessments.
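
A minimal sketch of this kind of model comparison in Python with scikit-learn, using synthetic stand-ins for the RITA items and static predictors rather than the study's data:

```python
# Compare logistic regression, elastic net, and random forest by
# cross-validated AUC, mirroring the comparison described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# ~746 recidivists + 746 matched controls; features are synthetic.
X, y = make_classification(n_samples=1492, n_features=60,
                           n_informative=15, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "elastic net": LogisticRegression(penalty="elasticnet", solver="saga",
                                      l1_ratio=0.5, C=1.0, max_iter=5000),
    "random forest": RandomForestClassifier(n_estimators=500,
                                            random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.2f}")
```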


AI and Ethics
2021
Author(s):
Jacqui Ayling
Adriane Chapman

Bias, unfairness, and a lack of transparency and accountability in Artificial Intelligence (AI) systems, together with the potential for the misuse of predictive models in decision-making, have raised concerns about the ethical impact and unintended consequences of new technologies for society across every sector where data-driven innovation is taking place. This paper reviews the landscape of suggested ethical frameworks, focusing on those that go beyond high-level statements of principles and offer practical tools for applying these principles in the production and deployment of systems. We assess these practical frameworks through the lens of known best practices for impact assessment and audit of technology. We review other historical uses of risk assessments and audits and create a typology that allows us to compare current AI ethics tools to best practices found in previous methodologies from technology, environment, privacy, finance, and engineering. We analyse current AI ethics tools, their support for diverse stakeholders and for the components of the AI development and deployment lifecycle, and the types of tools used to facilitate their use. From this, we identify gaps in the auditing and risk assessment support of current AI ethics tools that should be addressed going forward.


2020
Vol 4 (1)
pp. 45-73
Author(s):
Tereza Kuldova

Artificial intelligence, deep learning, and big data analytics are viewed as the technologies of the future, capable of delivering expert intelligence, decisions, risk assessments, and predictions within milliseconds. In a world of fakes, they promise to deliver 'hard facts' and data-driven 'truth', but their solutions resurrect ideologies of purity, embrace bogus science reminiscent of anthropometry, and create a deeply paranoid world where the Other is increasingly perceived as a threat, as a potential imposter, or as both. Social sorting in the age of intelligent surveillance acquires a whole new meaning. This article explores the possible effects of algorithmic governance on society through a critical analysis of the figure of the imposter in the age of intelligent surveillance. It links a critical analysis of new technologies of surveillance, policing, and border control to the extreme ethnographic example of paranoia within outlaw motorcycle clubs: organizations that are heavily targeted by new and old modes of policing and surveillance while increasingly embracing the very same logic and technologies themselves, with profound consequences. The article shows how, in the quest for power, order, profit, and control, we are sacrificing critical reason and, as a society, risk becoming not unlike these paranoid criminal organizations.


2019
Vol 2019
pp. 1-8
Author(s):
Paolo Santini
Giuseppe Gottardi
Marco Baldi
Franco Chiaraluce

Cyber risk assessment requires defined and objective methodologies; otherwise, its results cannot be considered reliable. The lack of quantitative data can be dangerous: if the assessment is entirely qualitative, subjectivity looms large in the process. Too much subjectivity in the risk assessment process weakens the credibility of the assessment results and can compromise risk management programs. On the other hand, obtaining a sufficiently large amount of quantitative data to allow reliable extrapolations and predictions is often hard or even unfeasible. In this paper, we propose and study a quantitative methodology for assessing the potential annualized economic loss of a company. In particular, our approach relies only on aggregated empirical data, which can be obtained from several sources. We also describe how the method can be applied to real companies in order to customize the initial data and obtain reliable, company-specific risk assessments.
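
The paper develops its own methodology from aggregated empirical data; as a generic illustration of quantitative (rather than qualitative) loss estimation, the sketch below computes the textbook annualized loss expectancy with made-up figures:

```python
# Textbook annualized loss expectancy (ALE), not the paper's own method.
# All figures below are invented for illustration.
def annualized_loss_expectancy(asset_value: float,
                               exposure_factor: float,
                               annual_rate_of_occurrence: float) -> float:
    """ALE = SLE * ARO, where SLE = asset value * exposure factor."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate_of_occurrence

# Example: a 2M EUR asset losing 25% of its value per incident,
# with incidents expected 0.4 times per year on average.
print(annualized_loss_expectancy(2_000_000, 0.25, 0.4))  # 200000.0
```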


2018
Author(s):
Benny Salo
Toni Laaksonen
Pekka Santtila

We estimated the predictive power of the dynamic items in the Finnish Risk and Needs Assessment Form (RITA), assessed by caseworkers, for predicting recidivism. These 52 items were compared to static predictors including crime committed, prison history, and age. We used two machine learning methods (elastic net and random forest) for this purpose and compared them with logistic regression. Participants were 746 men who had, and 746 who had not, reoffended during matched follow-up periods of 0.5 to 5.8 years. Both the RITA items and the static predictors predicted general and violent recidivism well (AUC = .73-.79), but combining them increased discrimination only slightly (ΔAUC = .01-.02) over the static predictors alone. Calibration was good for all models. We argue that the results show strong potential for the RITA items, but that development is best focused on improving usability for identifying treatment targets and for updating risk assessments.
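
A discrimination sketch accompanies the published 2019 abstract above; to complement it, here is a minimal sketch of the calibration check both versions report, again on synthetic data rather than the RITA sample:

```python
# Calibration check: bin predicted probabilities and compare them with
# observed recidivism rates; close values indicate good calibration.
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1492, n_features=60, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

probs = (LogisticRegression(max_iter=1000)
         .fit(X_tr, y_tr)
         .predict_proba(X_te)[:, 1])
observed, predicted = calibration_curve(y_te, probs, n_bins=5)
for pred, obs in zip(predicted, observed):
    print(f"predicted {pred:.2f} -> observed {obs:.2f}")
```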


Author(s):  
Freyja van den Boom

‘Telematics’ insurance is an example of data-driven innovation in the insurance industry, where data obtained from the vehicle (such as speed, time, and location) are used to offer consumers premiums based on their actual driving behavior. Despite the many benefits, including more accurate risk assessments and premium setting, there are serious privacy concerns about the increased use of vehicle data for insurance purposes. The information requirements of the GDPR and the IDD could address some of these concerns in the context of telematics insurance. This chapter concludes its analysis of the scope of those requirements by arguing for a broad interpretation of the information that must be made available, so as to effectively help consumers make better-informed decisions about insurance products and about the use of their personal data for insurance purposes.
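
As a purely hypothetical illustration of how such vehicle data might feed a usage-based premium (the risk weights and pricing formula below are invented assumptions, not any insurer's actual model):

```python
# Hypothetical usage-based pricing from telematics trip data.
from dataclasses import dataclass

@dataclass
class Trip:
    km_driven: float
    pct_over_speed_limit: float  # share of the trip spent speeding, 0..1
    night_driving: bool          # assumed higher-risk time window

def risk_score(trips: list[Trip]) -> float:
    """Return a 0..1 risk score; higher means riskier driving."""
    total_km = sum(t.km_driven for t in trips) or 1.0
    speeding = sum(t.km_driven * t.pct_over_speed_limit
                   for t in trips) / total_km
    night = sum(t.km_driven for t in trips if t.night_driving) / total_km
    return min(1.0, 0.7 * speeding + 0.3 * night)  # invented weights

def monthly_premium(base_premium: float, trips: list[Trip]) -> float:
    # Up to a 20% discount for safe driving, up to a 30% surcharge.
    return base_premium * (0.8 + 0.5 * risk_score(trips))

trips = [Trip(120, 0.05, False), Trip(30, 0.20, True)]
print(round(monthly_premium(50.0, trips), 2))  # 42.9
```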


