2 October 2016

Motherboard: Does Crime-Predicting Software Bias Judges? Unfortunately, There’s No Data

For centuries judges have had to make guesses about the people in front of them. Will this person commit a crime again? Or is this punishment enough to deter them? Do they have the support they need at home to stay safe and healthy and away from crime? Or will they be thrust back into a situation that drives them to their old ways? Ultimately, judges have to guess.

But recently, judges in states including California and Florida have been given a new piece of information to aid in that guess work: a “risk assessment score” determined by an algorithm. These algorithms take a whole suite of variables into account, and spit out a number (usually between 1 and 10) that estimates the risk that the person in question will wind up back in jail. [...]
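To make the idea concrete, here is a minimal, purely hypothetical sketch of how such a tool might fold a defendant's inputs into a 1–10 score. The feature names, weights, and cut points below are invented for illustration; real tools like COMPAS use their own proprietary inputs and scoring.

```python
# Hypothetical sketch of a risk-assessment score. All features, weights,
# and thresholds are made up for illustration only.

def raw_risk(features: dict) -> float:
    # Weighted sum of a few illustrative inputs (weights are invented).
    weights = {
        "prior_arrests": 0.6,
        "age_at_first_offense": -0.05,  # earlier first offense -> higher risk
        "failed_to_appear": 1.2,
        "unstable_housing": 0.8,
    }
    return sum(weights[k] * features.get(k, 0) for k in weights)

def decile_score(raw: float, cut_points: list) -> int:
    # Map the raw score onto a 1-10 scale by counting how many reference
    # cut points it exceeds (nine arbitrary thresholds here).
    return 1 + sum(raw > cut for cut in cut_points)

cuts = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]
defendant = {"prior_arrests": 3, "age_at_first_offense": 19, "failed_to_appear": 1}
print(decile_score(raw_risk(defendant), cuts))  # prints a number between 1 and 10
```

The point of the sketch is only the shape of the process: a handful of variables go in, a single coarse number comes out, and that number is what lands in front of the judge.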

I’ve been doing some research of my own into these recidivism algorithms, and when I read the ProPublica story, I came away with the same question I’ve had since I started looking into this: these algorithms are likely biased against people of color. But so are judges. So how do they compare? How does the bias present in humans stack up against the bias programmed into algorithms? [...]


All the researchers I talked to who study sentencing, risk assessment and these algorithms said they didn’t know of a single study that compared the sentencing patterns of judges who do and don’t use these scores. There are studies out there on a variety of risk-assessment tools that look at questions of accuracy and reliability. There are plenty of studies that compare the algorithms’ guesses about recidivism with who really did return to jail. But there’s nothing that compares judges with and without the scores. Which means that states are using these scores in a variety of contexts without having any idea how they might influence decisions that shape people’s lives.
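The validation studies that do exist are of the second kind mentioned above: line the tool's guesses up against who actually came back. A minimal sketch of that comparison is below; the records and score bands are hypothetical stand-ins for the court and rearrest data such studies actually use.

```python
# Sketch of a simple accuracy check: compare assigned risk scores with
# actual rearrest outcomes. The records below are invented examples.

from collections import defaultdict

# (decile score the tool assigned, whether the person was rearrested within 2 years)
records = [(3, False), (8, True), (5, False), (9, True), (2, False),
           (7, False), (6, True), (1, False), (10, True), (4, True)]

def rearrest_rate_by_band(records):
    # Group defendants into low (1-4), medium (5-7), and high (8-10) bands
    # and report the observed rearrest rate in each band.
    bands = {"low": range(1, 5), "medium": range(5, 8), "high": range(8, 11)}
    totals = defaultdict(lambda: [0, 0])  # band -> [rearrests, count]
    for score, rearrested in records:
        for name, band in bands.items():
            if score in band:
                totals[name][0] += rearrested
                totals[name][1] += 1
    return {name: hits / count for name, (hits, count) in totals.items()}

print(rearrest_rate_by_band(records))
# A well-calibrated tool should show rearrest rates rising from low to high bands.
```

What no study measures, per the researchers quoted above, is the other comparison: how the same judges sentence with and without that number in front of them.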


