Comment on “The Influence of Risk Assessment Instrument Scores on Evaluators’ Risk Opinions and Sexual Offender Containment Recommendations”

2017
Vol 44 (9)
pp. 1236-1241
Author(s):
Christopher Lobanov-Rostovsky

The work of the Colorado Sex Offender Management Board (SOMB) has been called into question by the article “The Influence of Risk Assessment Instrument Scores on Evaluators’ Risk Opinions and Sexual Offender Containment Recommendations,” published in Criminal Justice and Behavior (2017). This response addresses the following areas: significant nomenclature problems in the article’s description of the Adult Standards and Guidelines, the dated SOMB citations on which the article relies, flaws in its interpretation of the 17 SOMB risk factors and of SOMB policy on risk assessment, a potential confounding variable that may explain the results obtained, and, finally, the SOMB’s ongoing work to foster the use of validated risk assessment instruments and evidence-based policies and practices. The SOMB takes pride in providing up-to-date, research-supported practices for its providers and would never intentionally do otherwise, as the article suggests.

The Influence of Risk Assessment Instrument Scores on Evaluators’ Risk Opinions and Sexual Offender Containment Recommendations

2017
Vol 44 (9)
pp. 1213-1235
Author(s):
Katherine E. McCallum
Marcus T. Boccaccini
Claire N. Bryson

In Colorado, evaluators conducting sex offender risk assessments are required to assess 17 risk factors specified by the state’s Sex Offender Management Board (SOMB), in addition to scoring actuarial risk assessment instruments. This study examined the association between instrument scores, the 17 SOMB risk factors, and evaluator opinions concerning risk and need for containment in 302 Colorado cases. Evaluators’ ratings of risk indicated by noninstrument factors were often higher than their ratings of risk indicated by instrument results, but only their ratings of noninstrument factors were independently predictive of containment recommendations. Several of the most influential noninstrument factors (e.g., denial, treatment motivation) have been described by researchers as potentially misleading because they are not predictive of future offending. Findings highlight the need for more studies examining the validity of what risk assessment evaluators actually do, as opposed to what researchers think they should do.
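
To make the notion of “independently predictive” concrete, the sketch below fits a logistic regression of containment recommendations on both instrument-based and noninstrument risk ratings; with both predictors in the model, a significant coefficient for one indicates prediction over and above the other. The data are simulated and the variable names are illustrative assumptions, not the authors’ actual measures or analysis.

```python
# A minimal sketch (not the authors' analysis) of testing whether
# noninstrument risk ratings predict containment recommendations
# over and above instrument-based ratings, using simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 302  # matches the study's sample size; the data here are simulated

instrument = rng.normal(size=n)      # e.g., standardized actuarial score
noninstrument = rng.normal(size=n)   # e.g., rating based on the 17 SOMB factors

# Simulate recommendations driven mainly by the noninstrument rating
logit = -0.2 + 0.1 * instrument + 1.2 * noninstrument
containment = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([instrument, noninstrument]))
model = sm.Logit(containment, X).fit(disp=False)
print(model.summary(xname=["const", "instrument", "noninstrument"]))
# A significant coefficient for one predictor while the other is in the
# model is what "independently predictive" means in this context.
```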


1994
Vol 40 (2)
pp. 154-174
Author(s):
Patricia M. Harris

This article compares the predictive accuracy of a traditional, objective probation risk assessment instrument with the considerably more subjective, interview-based Client Management Classification (CMC) System, a tool with no previously noted applications to the prediction of risk. Subjects of the study were probationers under supervision in Austin, Texas. Results indicated that the CMC performed far more satisfactorily than did the traditional instrument. The CMC was found to be particularly successful in minimizing false positives (i.e., probationers incorrectly predicted to be high risks). The results suggest that offender risk assessment instruments of a national scope are possible. Implications for assessment and probation supervision practices are considered.
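
As a concrete illustration of the false-positive comparison described above, the sketch below computes the percentage of predicted-high-risk probationers who did not in fact reoffend under each of two instruments. All values are invented for illustration; this is not Harris’s data or analysis.

```python
# A hedged sketch (not the study's actual analysis) of comparing two
# instruments' false-positive percentages among predicted high-risk
# probationers, using made-up classification outcomes.
def false_positive_pct(predicted_high, reoffended):
    """Share of predicted-high-risk cases that did NOT reoffend."""
    fp = sum(1 for hi, re in zip(predicted_high, reoffended) if hi and not re)
    flagged = sum(predicted_high)
    return 100.0 * fp / flagged if flagged else 0.0

# Hypothetical outcomes for the same probationers under each instrument
reoffended     = [0, 1, 0, 0, 1, 0, 1, 0, 0, 0]
traditional_hi = [1, 1, 1, 0, 1, 1, 0, 1, 0, 0]  # flags many non-reoffenders
cmc_hi         = [0, 1, 0, 0, 1, 0, 1, 0, 0, 0]  # flags fewer non-reoffenders

print(f"Traditional FP%: {false_positive_pct(traditional_hi, reoffended):.0f}")
print(f"CMC FP%:         {false_positive_pct(cmc_hi, reoffended):.0f}")
```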


1986
Vol 32 (3)
pp. 367-390
Author(s):
Thomas R. Kane

Many prison classification systems include risk assessment instruments designed to assign individuals to institutions varying in security features or to levels of supervision differing in staffing patterns and the extent of inmate privileges. The decision criteria that make up classification instruments are selected to measure inmate attributes that predict the occurrence or rates of prison disciplinary problems. The present article is an introduction to research designs and methods for use in validating classification instruments. Hypothetical validity findings are presented to demonstrate the critical role such findings play in the development of a classification system. Finally, it is concluded that validation research, whether conducted for management or scientific purposes, is optimally carried out within a program of research designed to evaluate the risk assessment instrument in the context of the other components of the classification system and the environmental variables that influence the criterion behavior.
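
As one hypothetical validity check of the kind the article introduces, the sketch below correlates custody classification scores with a dichotomous criterion (any disciplinary infraction) using a point-biserial coefficient. The data, score range, and choice of coefficient are illustrative assumptions, not Kane’s designs.

```python
# A minimal illustration of a predictive validity check: correlate
# classification scores with a dichotomous criterion (any disciplinary
# infraction) on hypothetical data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
scores = rng.integers(0, 36, size=200)  # hypothetical custody scores

# Criterion behavior loosely tied to scores, plus noise
p_infraction = 1 / (1 + np.exp(-(scores - 18) / 6))
infraction = rng.binomial(1, p_infraction)

r, p = stats.pointbiserialr(infraction, scores)
print(f"point-biserial r = {r:.2f}, p = {p:.4f}")
# A meaningful positive r supports the instrument's decision criteria;
# a full program of research would also examine contextual variables
# (institution, supervision level) that can shift this relationship.
```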


2021
Article 009385482110405 (OnlineFirst)
Author(s):
Samantha A. Zottola
Sarah L. Desmarais
Evan M. Lowder
Sarah E. Duhart Clarke

Researchers and stakeholders have developed many definitions to evaluate whether algorithmic pretrial risk assessment instruments are fair in terms of their error and accuracy. Error and accuracy are often operationalized using three sets of indicators: false-positive and false-negative percentages, false-positive and false-negative rates, and positive and negative predictive value. To calculate these indicators, a threshold must be set, and continuous risk scores must be dichotomized. We provide a data-driven examination of these three sets of indicators using data from three studies on the most widely used algorithmic pretrial risk assessment instruments: the Public Safety Assessment, the Virginia Pretrial Risk Assessment Instrument, and the Federal Pretrial Risk Assessment. Overall, our findings highlight how conclusions regarding fairness are affected by the limitations of these indicators. Future work should move toward examining whether there are biases in how the risk assessment scores are used to inform decision-making.
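
For concreteness, the sketch below computes the three indicator sets named above after dichotomizing continuous risk scores at a threshold. The data, the threshold, and the exact operationalizations (spelled out in the printed formulas) are assumptions for illustration, not the authors’ procedures.

```python
# A hedged sketch of the three indicator sets described above, computed
# after dichotomizing continuous risk scores at a threshold.
# Scores, outcomes, and threshold are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
scores = rng.uniform(0, 1, size=1000)       # continuous pretrial risk scores
rearrested = rng.binomial(1, scores * 0.5)  # synthetic outcome
threshold = 0.6
flagged = scores >= threshold               # the dichotomization step

tp = int(np.sum(flagged & (rearrested == 1)))
fp = int(np.sum(flagged & (rearrested == 0)))
fn = int(np.sum(~flagged & (rearrested == 1)))
tn = int(np.sum(~flagged & (rearrested == 0)))
n = tp + fp + fn + tn

print(f"FP percentage (FP/N):  {fp / n:.3f}")
print(f"FN percentage (FN/N):  {fn / n:.3f}")
print(f"FP rate (FP/(FP+TN)):  {fp / (fp + tn):.3f}")
print(f"FN rate (FN/(FN+TP)):  {fn / (fn + tp):.3f}")
print(f"PPV (TP/(TP+FP)):      {tp / (tp + fp):.3f}")
print(f"NPV (TN/(TN+FN)):      {tn / (tn + fn):.3f}")
# Group-wise fairness comparisons repeat these calculations within
# subgroups; conclusions can flip with the choice of threshold.
```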


2005
Vol 7 (2)
pp. 57-85
Author(s):
Susan Brumbaugh
Danielle M. Steffey

Since the late 1970s, agencies responsible for supervising offenders in the community have used assessment instruments as a way to classify offenders according to levels of risk and need. This, in turn, has provided a basis for assigning offenders to levels of community supervision. Despite evidence that risk and needs assessment instruments are not universal and should be validated for use with a particular population, most agencies currently using them have adopted existing instruments without determining whether the instruments are valid for their offender populations. Furthermore, agencies that do validate instruments for their populations often do so under less than ideal circumstances, facing constraints and data limitations that affect research decisions. This article describes how the State of New Mexico dealt with such constraints and limitations to construct a validated risk assessment instrument and illustrates the continued importance of validating risk assessment instruments, highlighting issues related to their implementation.


2011
Vol 38 (6)
pp. 541-553
Author(s):
Melinda D. Schlager
Daniel Pacheco

The Level of Service Inventory–Revised (LSI-R) is an actuarially derived risk assessment instrument with a strong reputation and a record of supportive research, having demonstrated predictive validity across several offender populations. Although a significant literature has emerged on the validity and use of the LSI-R, no research has specifically examined change scores or the dynamics of reassessment and their importance for case management. Flores, Lowenkamp, Holsinger, and Latessa, as well as Lowenkamp and Bechtel, among others, specifically identify the importance of examining LSI-R reassessment scores. The present study uses a sample of parolees (N = 179) from various community corrections programs who were administered the LSI-R at two points in time. Results indicate statistically significant decreases in both mean composite and subcomponent LSI-R scores between Time 1 and Time 2. The practical, theoretical, and policy implications of these results are discussed.
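
A minimal sketch of the kind of Time 1 versus Time 2 comparison described, assuming a paired-samples t-test (the abstract does not name the statistical test) and simulated LSI-R totals:

```python
# A minimal sketch, assuming a paired-samples t-test (an assumption;
# the abstract does not name the test), of checking whether LSI-R
# composite scores decrease from Time 1 to Time 2. Scores are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 179  # matches the study's sample size; the scores here are simulated
time1 = rng.normal(25, 8, size=n).clip(0, 54)            # LSI-R totals: 0-54
time2 = (time1 - rng.normal(2, 4, size=n)).clip(0, 54)   # modest average drop

t, p = stats.ttest_rel(time1, time2)
print(f"mean T1 = {time1.mean():.1f}, mean T2 = {time2.mean():.1f}")
print(f"paired t = {t:.2f}, p = {p:.4f}")
# A significant decrease is consistent with change on the instrument's
# dynamic items, which is why reassessment matters for case management.
```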

