Jennifer Skeem and Christopher Lowenkamp analyze trade-offs inherent in predicting recidivism
As part of Christopher Slobogin’s Special Issue on Implementation of Post-Conviction Risk Assessment, Jennifer Skeem and Christopher Lowenkamp analyze how alternative strategies for "debiasing" risk assessment algorithms affect fairness trade-offs in their article "Using Algorithms to Address Trade-Offs Inherent in Predicting Recidivism."
Although risk assessment has increasingly been used as a tool to help reform the criminal justice system, some stakeholders are adamantly opposed to using algorithms. The principal concern is that any benefits achieved by safely reducing rates of incarceration will be offset by costs to racial justice claimed to be inherent in the algorithms themselves. But fairness trade‐offs are inherent to the task of predicting recidivism, whether the prediction is made by an algorithm or a human. Based on a matched sample of 67,784 Black and White federal supervisees assessed with the Post Conviction Risk Assessment, we compared how three alternative strategies for “debiasing” algorithms affect these trade‐offs, using arrest for a violent crime as the criterion. These candidate algorithms all strongly predict violent reoffending (areas under the curve = 0.71–0.72), but vary in their association with race (r = 0.00–0.21) and shift trade‐offs between balance in positive predictive value and false‐positive rates. Providing algorithms with access to race (rather than omitting race or “blinding” its effects) can maximize calibration and minimize imbalanced error rates. Implications for policymakers with value preferences for efficiency versus equity are discussed.
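The two group-fairness metrics the abstract weighs against each other, positive predictive value (PPV) and false-positive rate (FPR), can be made concrete with a short sketch. The function and the toy data below are purely illustrative assumptions, not the authors' analysis or data; the example shows how two groups with different base rates can have equal PPV yet unequal FPR, which is the kind of trade-off the article examines.

```python
# Illustrative sketch (hypothetical data, not the article's analysis):
# computing positive predictive value (PPV) and false-positive rate (FPR)
# for a binary "high risk" classification, separately by group.

def ppv_and_fpr(y_true, y_pred):
    """Return (PPV, FPR) for binary outcomes and predictions (1 = recidivism)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    fpr = fp / (fp + tn) if (fp + tn) else float("nan")
    return ppv, fpr

# Hypothetical groups with different base rates of the outcome.
group_a_true = [1, 0, 0, 0]
group_a_pred = [1, 1, 0, 0]
group_b_true = [1, 1, 0, 0, 0, 0]
group_b_pred = [1, 1, 1, 1, 0, 0]

ppv_a, fpr_a = ppv_and_fpr(group_a_true, group_a_pred)  # PPV 0.5, FPR 1/3
ppv_b, fpr_b = ppv_and_fpr(group_b_true, group_b_pred)  # PPV 0.5, FPR 0.5
```

Here both groups see the same PPV (half of those flagged "high risk" are rearrested), yet group B's non-recidivists are flagged at a higher rate, illustrating why balancing one metric can unbalance the other when base rates differ.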