Artificial Inequalities

Three questions to Ruha Benjamin, author of “Race After Technology: Abolitionist Tools for the New Jim Code”

What do you think are the worst effects of bias in artificial intelligence?

The obvious harms are the ways in which technologies are used to reinforce certain kinds of citizen-scoring systems, and the ways in which our existing inequalities get reinforced and amplified through technology. That could be seen as the worst of the worst.

But for me the worst of the worst is when we unwittingly reinforce various forms of oppression in the context of trying to do good. I describe this in the book as “techno-benevolence”. It’s an acknowledgement that humans are biased. We discriminate. We have all kinds of institutionalised forms of inequalities that we take for granted, and here you have technologists who say, “We have a fix for that. If we just employ this software program or just download this app or just include this system in your institution or your company, we can go around that bias”.

Another disturbing element in your book is that certain technologies created to judge how a person looks are technically not geared up to recognise and distinguish between people who have dark skin.

Facial recognition software is being adopted not just by police but also by stores that are using it to identify people who look like criminals. So now you have an entire technical apparatus facilitating this, and for some people the first layer was just a question of “Does this technology actually work?”

A number of researchers have shown that in fact it’s very bad at identifying people who are darker-skinned, Black women in particular. My colleague Joy Buolamwini at the Massachusetts Institute of Technology has demonstrated this really effectively, so just at the level of effectiveness, it’s worse at identifying non-Whites and non-males. Then the added issue is: even if it were perfectly effective, and it could identify everyone perfectly, would we still want it?

We have places like San Francisco that have banned its use by law enforcement. Other cities are considering legislation, and more and more people who are otherwise supporters of automated systems understand, when it comes to facial recognition, the nefarious ways it could be used. For example, it can be used to dampen social protest. If you deploy these tools in big crowds to look for people who are exercising their right to protest, and those technologies are surveilling them at a distance, people will become more reluctant to get involved in the democratic process and in holding politicians accountable. These are the next-layer questions that we have to wrestle with.

But it’s not fair to blame the technology, is it? Surely the datasets on which these artificial intelligence systems were trained provide the information on which they base all their decisions. If you could fix the input to the AI, then the output would also be more racially balanced.

It’s a great question, and I think you’re right that blame is not necessarily the right framework. But I do think that there’s plenty of responsibility to go around. At many different points there are places where we could be making better decisions. Yes, the existing data is one point. Yes, the input data is biased because of widespread racial profiling practices, so that if you’re Black or Latinx you’re more likely to have a criminal record. Therefore, you are training the software to associate criminality with these racial groups.
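
To make that mechanism concrete, here is a minimal, hypothetical sketch (not from the book or the interview; the function, groups and numbers are invented for illustration): two groups offend at exactly the same rate, but one is policed far more intensively, so a model trained only on arrest records would infer a large difference in “risk”.

```python
# Hypothetical illustration: identical behaviour, unequal policing,
# unequal apparent "risk" in the resulting training data.
import random

random.seed(0)

def simulated_arrests(n_people, offence_rate, patrol_intensity):
    """Offending is equally likely in both groups; only the chance of
    being caught (patrol_intensity) differs."""
    arrests = 0
    for _ in range(n_people):
        offended = random.random() < offence_rate
        caught = offended and random.random() < patrol_intensity
        arrests += caught
    return arrests

# Same underlying offence rate (10%), very different policing intensity.
arrests_a = simulated_arrests(10_000, offence_rate=0.10, patrol_intensity=0.60)
arrests_b = simulated_arrests(10_000, offence_rate=0.10, patrol_intensity=0.15)

# A model trained on arrest counts alone would treat this gap as real risk.
print("Group A apparent risk:", arrests_a / 10_000)  # roughly 0.06
print("Group B apparent risk:", arrests_b / 10_000)  # roughly 0.015
```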

But another point of responsibility, another place we have to look, is at the level of code, where you’re weighting certain types of factors more than others. For example, if you live in a community in which there’s a high unemployment rate, many algorithms in the criminal justice system take that as meaning you’re more at risk of being a repeat offender. So the unemployment rate of your community is then associated with you being a higher risk. Now that’s a problem! You have no control over the high unemployment rate. What if we trained our algorithms to look at how hospitals produce risk, rather than saying “This individual is high risk”? That’s a different way of orienting the use of technology, and also of thinking about where the danger in society lies. It’s not about individual-level danger. It’s about how our social policies and our social order produce danger for different communities at very different rates.
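
As a purely illustrative sketch of the kind of weighting she describes (the factors and weights below are invented, not taken from any real tool), the only difference between the two people is the unemployment rate of their community, a factor neither of them controls, yet it alone moves the score.

```python
# Hypothetical risk score: factors and weights are made up for illustration.
def risk_score(prior_convictions, age, community_unemployment_rate):
    score = 0.0
    score += 1.5 * prior_convictions                # individual history
    score += 0.5 * max(0, 30 - age) / 10            # younger -> slightly higher
    score += 4.0 * community_unemployment_rate      # community-level factor
    return score

# Identical personal histories, different neighbourhoods.
low_unemployment_area = risk_score(prior_convictions=1, age=25,
                                   community_unemployment_rate=0.04)
high_unemployment_area = risk_score(prior_convictions=1, age=25,
                                    community_unemployment_rate=0.20)

print(low_unemployment_area)   # 1.91
print(high_unemployment_area)  # 2.55
```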

Ruha Benjamin is Associate Professor of African American Studies at Princeton University.

Race After Technology: Abolitionist Tools for the New Jim Code is available now, published by Polity.