Does staff see what experts see? Accuracy of front line staff in scoring juveniles’ risk factors. (2017)

Abstract
Although increasingly complex risk assessment tools are being marketed, little is known about “real world” practitioners’ capacity to score them accurately. In this study, we assess the extent to which 78 staff members’ scoring of juveniles on the California-Youth Assessment and Screening Instrument (CA-YASI; Orbis Partners, Inc., 2008) agrees with experts’ criterion scores for those cases. There are 3 key findings. First, at the total score level, practitioners manifest limited agreement (M ICC = .63) with the criterion: Only 59.0% of staff score the tool with “good” accuracy. Second, at the subscale level, practitioners’ accuracy is particularly weak for treatment-relevant factors that require substantial judgment—like pro-criminal attitudes (M ICC = .52)—but good for such straightforward factors as legal history (M ICC = .72). Third, practitioners’ accuracy depended on their experience: relatively new staff’s scores were more consistent with the criterion than those of staff with more years of experience. Results suggest that attention to parsimony (for tools) and meaningful training and monitoring (for staff) are necessary to realize the promise of risk assessment for informing risk reduction.

Kennealy, P. J., Skeem, J. L., & Hernandez, I. R. (2017). Does staff see what experts see? Accuracy of front line staff in scoring juveniles’ risk factors. Psychological Assessment, 29(1), 26-34. http://dx.doi.org/10.1037/pas0000316