Facebook apologises for classifying black men as primates
This is an authentic article written by BramvdnHeuvel.
Estimated reading time: 2 mins.
Artificial intelligences can have racial biases. Caucasian faces are 10 to 100 times more likely to be correctly recognized and identified than African-American and Asian faces, a disparity that has already led to wrongful accusations, arrests and even imprisonments because an artificial intelligence couldn't correctly match a non-white face.
Many students experienced this first-hand during the pandemic. Companies like Proctorio found great success at universities and schools, where teachers instructed their students to install software that checks whether they're taking a test honestly. The software was more likely to accuse people of color of cheating because it couldn't recognize their faces. Such proctoring software has been shown to reinforce white supremacy, sexism, ableism and transphobia.
Facebook's apology concerned videos of black men in altercations with white civilians and police officers, according to The New York Times. An image recognition AI classified the clips as footage of monkeys or primates, even though the videos had nothing to do with either.
How does this happen? A bias in an artificial intelligence is usually the result of a biased training set. As Joy Buolamwini explains in her TED talk, face recognition software often fails to recognize faces like hers because they are underrepresented in the training data. In the same way, a model trained on a data set where over 80 percent of the faces are white may have a harder time recognizing people with other skin colors, and may resort to classifying those faces as something close to a human, like monkeys and other primates.
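To make this concrete, here is a minimal sketch in plain Python and numpy, using entirely synthetic data. Nothing in it is Facebook's actual model or data; the two "groups", the blob positions and the 90/10 split are assumptions chosen purely for illustration. A classifier trained on a skewed mix scores well on the over-represented group and barely above chance on the under-represented one:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_group(n, mean0, mean1):
    """Draw n points per class from two Gaussian blobs (toy stand-ins for face features)."""
    x0 = rng.normal(mean0, 0.7, size=(n, 2))
    x1 = rng.normal(mean1, 0.7, size=(n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

# Group A (90% of training data): classes separate along the x-axis.
# Group B (10% of training data): classes separate along the y-axis,
# so a decision boundary fitted mainly to group A transfers poorly to B.
XA, yA = sample_group(900, mean0=(-1, 0), mean1=(1, 0))
XB, yB = sample_group(100, mean0=(0, -1), mean1=(0, 1))
X, y = np.vstack([XA, XB]), np.concatenate([yA, yB])

# Minimal logistic regression trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of class 1
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def accuracy(X_test, y_test):
    return np.mean(((X_test @ w + b) > 0).astype(int) == y_test)

# Evaluate on fresh, balanced samples from each group.
print("group A accuracy:", accuracy(*sample_group(500, (-1, 0), (1, 0))))  # high, around 0.9
print("group B accuracy:", accuracy(*sample_group(500, (0, -1), (0, 1))))  # barely above chance
```

The model never "chooses" to discriminate; it simply fits the data it is given, and 90 percent of that data describes one group.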
A scandal similar to Facebook's recent mistake occurred in 2015, when Google Photos mistakenly identified black people as gorillas. However, instead of fixing the artificial intelligence, Google removed words like "gorilla", "chimp", "chimpanzee" and "monkey" from the service. Although this means that humans will no longer be labeled as monkeys on Google Photos, it suggests that the underlying problems with the AI were never fixed.
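Google's reported workaround amounts to post-processing rather than retraining. The hypothetical sketch below (the names BLOCKED_LABELS and filter_predictions are invented for illustration, not Google's actual code) shows why such a blocklist hides the symptom without curing the model:

```python
# Hypothetical post-processing blocklist, mirroring the reported workaround.
# The classifier itself is unchanged; banned labels are simply dropped.
BLOCKED_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}

def filter_predictions(predictions):
    """predictions: (label, confidence) pairs from some image classifier."""
    return [(label, conf) for label, conf in predictions if label not in BLOCKED_LABELS]

# The model still "believes" the image shows a gorilla; the output just hides it.
print(filter_predictions([("gorilla", 0.91), ("person", 0.07)]))
# [('person', 0.07)]
```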
Although the Facebook AI's mistake stems from an issue that plays out on a societal level, Facebook itself has a record of abuse, often racial.
A track record like this leaves plenty of room to wonder how sincerely Facebook takes the primate mismatch -- and to wonder whether the apology is simply a PR move to prevent further controversy.
Image recognition is a very useful tool that can improve our daily lives, but innovation must not come at the price of discrimination or the reinforcement of bigotry. Classifying a human as an ape is a profound indignity, and it should be obvious that Facebook as a company must be held responsible.
It is no small matter that a simple algorithm can have major consequences on a large platform, and algorithms should be treated accordingly. A face recognition algorithm is not something you can simply unleash on millions of people, and categorizing thousands of people into the gorilla section is as unacceptable as publicly denouncing those people as monkeys.
Don't let an apology be enough, especially for a company with such a track record of racial abuse.