Facebook Reveals Racism in its Algorithm

Facebook is facing severe backlash after its AI video recommendation system mislabelled a video of Black men as “primates.” Facebook users who watched the video were prompted to “keep seeing videos about Primates.” Posted on June 27, 2021 by the British tabloid Daily Mail, the video features clips of Black men in altercations with white civilians and police officers. Nothing in the video is related to primates. 

A Facebook spokesperson said the company disabled the entire topic-recommendation feature that produced the label as soon as it became aware of the “unacceptable error.” The tech giant says it is investigating the cause of the error to prevent future incidents and has apologized to anyone who saw the offensive recommendation. 

Unfortunately, this is only the latest example of “unacceptable errors,” ethical lapses, and oppression perpetuated by artificial intelligence technologies. AI has been observed to carry racist and sexist biases in the past, notably in facial recognition tools that struggle to identify people of color. A 2018 study conducted by Joy Buolamwini, a researcher at the MIT Media Lab, found that AI software correctly identified photos of white men 99% of the time, but that errors increased dramatically for people with darker skin tones. Buolamwini also found that when shown photos of women with darker skin, the software recognized them accurately only 65% of the time. 
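Audits like Buolamwini’s work by disaggregating a model’s accuracy by demographic group rather than reporting a single overall number, which can hide large disparities. Here is a minimal sketch of that kind of evaluation; the records below are illustrative stand-ins chosen to mirror the study’s reported rates, not data from the study itself:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute per-group accuracy from (group, correct) records.

    A single aggregate accuracy can mask large gaps between groups;
    breaking results down per group, as bias audits do, exposes them.
    """
    totals = defaultdict(lambda: [0, 0])  # group -> [num correct, num total]
    for group, correct in records:
        totals[group][0] += int(correct)
        totals[group][1] += 1
    return {group: correct / total for group, (correct, total) in totals.items()}

# Illustrative records only: (demographic group, was the prediction correct?)
records = (
    [("lighter-skinned men", True)] * 99 + [("lighter-skinned men", False)] * 1
    + [("darker-skinned women", True)] * 65 + [("darker-skinned women", False)] * 35
)

print(accuracy_by_group(records))
# {'lighter-skinned men': 0.99, 'darker-skinned women': 0.65}
```

A model with a strong headline accuracy on this data would still be failing one group more than a third of the time, which is exactly the kind of gap a per-group breakdown surfaces.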

In 2015, Google’s photo app mistakenly categorized photos of Black people as ‘gorillas.’ After apologizing, the tech company appeared to have fixed the biased algorithm. However, over two years later, Wired discovered that Google’s solution was to censor the words ‘gorilla,’ ‘chimp,’ ‘chimpanzee,’ and ‘monkey’ from searches rather than actually fixing the root of the issue: that AI tools carry the racial biases of their creators.

These types of facial recognition failures have led to dangerous consequences for people of color. In January 2020, Robert Julian-Borchak Williams was wrongfully arrested by the Detroit Police Department after a flawed facial recognition match mistook him for another Black man. 

In April 2021, the U.S. Federal Trade Commission published an article highlighting the serious problems that AI tools have shown and warned that these tools may violate consumer protection laws: “Advances in artificial intelligence (AI) technology promise to revolutionize our approach to medicine, finance, business operations, media, and more. But research has highlighted how apparently ‘neutral’ technology can produce troubling outcomes – including discrimination by race or other legally protected classes.”

As these tools continue to grow, AI shows great promise and efficiency across various sectors. However, AI innovators must create inclusive and accessible spaces to ensure that a diverse network of people contributes to the design of these tools, while consistently auditing algorithms to catch biases before they are built in.  

Read more at The Verge and the New York Times.