AI, Racism, and Society

In a fiery spoken-word piece, “AI, Ain’t I A Woman?,” Joy Buolamwini asks artificial intelligence a simple question: “Often forgetting to deal with / Gender, race, and class, again I ask, Ain’t I a Woman?” The sentiment rings true as visuals play of high-profile black women, such as Michelle Obama, Oprah Winfrey, and Serena Williams, being misidentified by various facial recognition software; the AI concludes that they are all men. This is evidence of underlying racial bias, and it is not an isolated incident: racial bias is persistent in AI.

In 2015, Google’s photo-labeling software tagged photos of black people as gorillas. A viral video in 2017 showed an automatic soap dispenser that dispensed soap for white hands but failed to detect darker skin. Studies have found that the pedestrian-detection systems behind self-driving cars are less accurate at recognizing black pedestrians, making crashes more likely. These cases stand in direct contrast with what humans design AI to do: mimic human intelligence and perform tasks without human error or bias.

But one has to look at the starting point of AI development: humans and the data we feed it. For all its intelligence, AI (and its data) is in many ways still a reflection of ourselves, whether that means our history, our society, or the computer scientists developing it, and of the prejudice that exists in those spheres. It is not hard to see how some of our biases slip into the technology; at times, AI merely spits back the prejudices that already exist around it. As we give AI more decision-making capacity, it has made its way into fields such as criminal justice, where it poses significant risks for the minorities targeted by racism.


How AI Learns (Racial) Bias

Data allows AI to function: AI learns from data, then uses it to perform tasks and make decisions. Humans give AI this data, so in some sense, we are the starting point of bias. One way biases slip in is when an AI gathers information from the mass public; in doing so, it picks up on the racism, sexism, and other biases that exist among members of society.

Microsoft’s 2016 Twitter bot, “Tay,” was a perfect example of this. Microsoft built Tay to interact with people on Twitter and mimic how teenagers talked. In just 16 hours, however, Tay was making racist remarks, denying the Holocaust, and calling for genocide. Microsoft did not set out for Tay to tweet these inflammatory sentiments; Twitter users did, and Tay’s job was to imitate the way people talked on Twitter. It did exactly that, reflecting the racism and bigotry that existed in society.


AI will sometimes use a form of machine learning known as word embedding, in which a program learns the meaning of a word by associating it with the words that appear near it in sentences. In a study, computer scientist Aylin Caliskan and her colleagues found that an AI trained this way learned to be racist and sexist as it scoured billions of words on the internet. The program concluded that African-American names were less associated with pleasant words than white-sounding names were. Were we to take what the program deduced as truth, we would have to say that people with African-American names are unpleasant. That is untrue; the verdict came from the way people talked on the internet. Like Tay, the program did not set out to be racist. Existing human bias made it so.
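To make the idea concrete, here is a minimal sketch, in Python, of how such an association test can work. The tiny embedding vectors, the example names, and the “pleasant”/“unpleasant” word lists below are invented placeholders; Caliskan and her colleagues used real embeddings trained on billions of words from the web and much longer word lists.

```python
# A minimal sketch of the word-embedding association idea described above.
# The "embeddings" dictionary and the word lists are made-up placeholders.
import numpy as np

# Hypothetical 3-dimensional word vectors, purely for illustration.
embeddings = {
    "emily":   np.array([0.9, 0.1, 0.2]),
    "lakisha": np.array([0.2, 0.8, 0.3]),
    "joy":     np.array([0.8, 0.2, 0.1]),
    "agony":   np.array([0.1, 0.9, 0.4]),
}

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point in the same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(name, pleasant, unpleasant):
    """Average similarity to pleasant words minus average similarity to unpleasant words."""
    pos = np.mean([cosine(embeddings[name], embeddings[w]) for w in pleasant])
    neg = np.mean([cosine(embeddings[name], embeddings[w]) for w in unpleasant])
    return pos - neg

pleasant, unpleasant = ["joy"], ["agony"]
for name in ["emily", "lakisha"]:
    print(name, round(association(name, pleasant, unpleasant), 3))
```

With real embeddings, a consistently lower score for African-American names than for white-sounding names is exactly the kind of learned bias the study documented.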

The second source of bias might be easier to identify: AI developers. It is worth looking at the demographics of the people creating AI; the field is overwhelmingly male and overwhelmingly white. The AI Now Institute reports that more than 80% of AI professors are men. According to Metro, in 2016, ten technology companies in Silicon Valley employed zero black women, and three had no black employees at all. When there is so little diversity among developers, AI may receive restrictive data sets: developers do not train the programs on data that represents the whole human spectrum. Such was the case with the misidentifications shown in the “AI, Ain’t I A Woman?” poem and with self-driving cars that could not identify black people as pedestrians. AI scientists did not give these systems enough data representative of darker skin tones for the systems to learn from; their focus was on the people who looked like them. Because of this, facial recognition repeatedly fails to identify people with darker skin. A lack of diversity has long plagued the tech world, and it has seeped into AI development, exposing the implicit bias (and implicit racism) in the field.

AI’s Racism In The Real World

Even with these alarming flaws, AI has made its way into fields that affect many of the minorities the technology is biased against, a crucial one being criminal justice. There are noble reasons for wanting to use AI in this sphere. The US incarcerates more people than any other country in the world; at the end of 2016, nearly 2.2 million adults were held in prisons or jails, and another 4.5 million were in other correctional facilities. These harrowing statistics create pressure for a system that can efficiently decide how people move through the justice system. AI’s automated decisions may seem like the answer, but its underlying bias is a cause for concern. Nonetheless, AI has been used to decide where to deploy police, to run facial recognition searches, and to inform sentencing decisions based on recidivism, the tendency of a convicted person to re-offend.

There is ample evidence that facial recognition often fails for darker-skinned individuals. It continuously misidentifies black people and women, the very groups underrepresented among those developing the technology. In the hands of the police, these misidentifications could lead to unnecessary encounters with law enforcement, and the technology has the potential to flag individuals from disadvantaged backgrounds at higher rates. Confirmation bias is also a problem: imagine body cameras that run facial recognition in real time. If the software flags minorities as a greater risk, it can reinforce racial bias an officer may already hold, a dangerous situation that could incite violence.

Another way the criminal justice system uses AI comes after the crime: risk assessment. Risk assessment algorithms take in a person’s profile and calculate a score for how likely that person is to re-offend. Judges may use this score to make decisions about the length of a sentence and the type of rehabilitation someone should receive, among other things. The idea is that handing these decisions to AI will remove human bias. Of course, as we have seen, prejudice exists in AI too. Where it stems from in criminal risk assessment is slightly different than in facial recognition or in Aylin Caliskan’s word-embedding study. The bias still comes from data, but in this case it is historical crime data, from which the AI picks up correlations between crime and certain demographics, such as low-income people, black people, and other minorities. While those correlations may appear in the data, we must note that law enforcement has historically over-policed these very demographics, so the data the AI picks up is itself rooted in an oppressive history. It raises the question: is it right to punish people for circumstances that, in many ways, they have no control over?

This history repeats itself as black people, low-income people, and other historically targeted demographics often receive higher criminal risk assessment scores. The state of Wisconsin used one such risk assessment program, called COMPAS. A study by ProPublica found that the program was biased against black people: black defendants who did not go on to re-offend were wrongly labeled high risk at a rate of roughly 45%, compared with about 25% for white defendants. Because they are more likely to receive a higher risk score, black defendants are also more likely to receive harsher punishments under this program.
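As an illustration of the kind of check ProPublica ran, the short Python sketch below compares how often people who did not re-offend were nonetheless labeled high risk in each group. The records here are invented placeholders, not real COMPAS data; the actual analysis used thousands of real defendants’ records.

```python
# A minimal sketch of the fairness check described above: comparing, by group,
# how often people who did NOT re-offend were still labeled high risk
# (the "false positive" rate). All records below are invented for illustration.

def false_positive_rate(records):
    """Share of people who did not re-offend but were still labeled high risk."""
    did_not_reoffend = [r for r in records if not r["reoffended"]]
    wrongly_flagged = [r for r in did_not_reoffend if r["high_risk"]]
    return len(wrongly_flagged) / len(did_not_reoffend)

# Hypothetical records: group, whether the score labeled the person high risk,
# and whether they actually re-offended.
records = [
    {"group": "black", "high_risk": True,  "reoffended": False},
    {"group": "black", "high_risk": True,  "reoffended": False},
    {"group": "black", "high_risk": False, "reoffended": False},
    {"group": "black", "high_risk": True,  "reoffended": True},
    {"group": "white", "high_risk": True,  "reoffended": False},
    {"group": "white", "high_risk": False, "reoffended": False},
    {"group": "white", "high_risk": False, "reoffended": False},
    {"group": "white", "high_risk": False, "reoffended": True},
]

for group in ("black", "white"):
    subset = [r for r in records if r["group"] == group]
    print(group, round(false_positive_rate(subset), 2))
```

A large, persistent gap between those two numbers, on real data, is what led ProPublica to call the scores biased.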

This creates a cycle of oppression: the AI continues to victimize historically targeted people, which in turn generates more biased data, a loop that is hard to escape. If not used with care, these systems will keep targeting black people and other minorities, repeating an unfair history.

What Lies Ahead

Solutions exist for creating AI that represents the whole spectrum of people. One is increasing the diversity of AI engineers. Minorities bring their own experiences and knowledge, helping ensure that algorithms are trained on data that represents everyone and are tested on a diverse range of people. Having a more diverse group in the room makes it far more likely that data sets and algorithms will be less biased and will work for everyone.

The algorithms behind AI are also often not transparent. This lack of transparency creates a situation where we cannot fully understand why we get the answers we do. More transparent algorithms would allow humans to question the accuracy of those conclusions, because they could see how the system works and where its flaws lie.

Similarly, we must understand that AI is not a perfect technology. Especially in areas like criminal risk assessment, the data AI uses reflects historical trends. We must question why those trends exist (over-policing, underserved neighborhoods, and so on) and whether they should continue to exist. Humans need to ask and answer these questions; we cannot leave them to AI.

When given historical datasets, AI will undoubtedly pick up on the patterns that exist and reflect them, whether or not those patterns prevail because of inequity. When AI is given restrictive data sets, it will spit out results that deny the existence of a diverse range of humans. It is not that AI has no place in our world; it does, and many advancements in the field have genuinely helped humans. But we, as people, must examine ourselves and our society to see how racism, sexism, and other “-isms” have slipped into various aspects of this technology. And when we use AI, we must ask whether it is only further perpetuating the prejudice that already exists in our world.


SOURCES:

https://www.vox.com/science-and-health/2019/1/23/18194717/alexandria-ocasio-cortez-ai-bias

https://www.propublica.org/article/breaking-the-black-box-how-machines-learn-to-be-racist?word=Trump

https://towardsdatascience.com/racist-data-human-bias-is-infecting-ai-development-8110c1ec50c

https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/#:~:text=Using%20historical%20data%20to%20train,the%20mistakes%20of%20the%20past.


Lucy Damachi

Lucy Damachi is a 16-year-old high school junior in Nigeria. She has interests in climate justice, racial justice, indigenous rights, and sustainable development. Growing up in Nigeria has shaped many of these passions. She is a member of her school’s Student Council, served as vice president of her school’s Green Club, and is in love with community service. In her free time, she loves to sing along to her favorite songs, read all sorts of literature, and explore the world of spoken word poetry. She is very excited to be working with the Zenerations team!
