In her illuminating new book, Weapons of Math Destruction, the data scientist Cathy O’Neil describes how companies, schools, and governments evaluate consumers, workers, and students based on ever more abundant data about their lives. She makes a convincing case that this reliance on algorithms has gone too far: Algorithms often fail to capture unquantifiable qualities such as a worker’s motivation and care, and they discriminate against the poor and others who cannot so easily game the metrics.
Basing decisions on impartial algorithms rather than subjective human appraisals would seem to keep favoritism, nepotism, and other biases out. But as O’Neil thoughtfully observes, statistical models that measure performance inherit the biases of their creators. As a result, algorithms are often unfair and sometimes harmful. “Models are opinions embedded in mathematics,” she writes. [...]
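To make that quoted claim concrete, here is a minimal sketch of my own (not an example from the book) of how a scoring model’s coefficients encode its designer’s opinions; the function, its weights, and its inputs are all hypothetical:

def performance_score(tasks_closed: int, sick_days: int, overtime_hours: float) -> float:
    # Hypothetical worker-scoring model. Every coefficient is an opinion:
    # rewarding overtime favors workers free of caregiving duties, and
    # penalizing sick days treats illness as poor performance.
    return 1.0 * tasks_closed - 2.0 * sick_days + 0.5 * overtime_hours

# Two workers who close the same number of tasks:
caregiver = performance_score(tasks_closed=40, sick_days=6, overtime_hours=0)
night_owl = performance_score(tasks_closed=40, sick_days=0, overtime_hours=20)
print(caregiver, night_owl)  # 28.0 vs. 50.0: the gap reflects the weights, not the work

Nothing in the mathematics makes these weights correct; they are the modeler’s values, dressed up as measurement.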
Indeed, the desire for an efficiency achieved through a never-ending gauntlet of appraisals is unhealthy. It exhausts workers with the need to perform well at all times. It pushes them into constant competition with one another, vying for top rankings that, by definition, only a few can attain. It convinces people—workers, managers, students—that individual metrics are what really matter, and that any failure to dole out pay raises, grades, and other rewards based on them is unfair. And it leads the better-off to judge those below them, homing in on all the evidence that tells them how much more they deserve than others do. In this way, “objective” models provide socially acceptable excuses to blame certain people—most often, the poor and people of color—for a past that, once digitally noted, is never really forgotten or forgiven.