Reports have emerged that Facebook's AI labeled a video featuring Black men as a "primate" video and prompted users to watch more primate videos. The video, posted in June, showed white people, including police officers, in confrontations with Black men.
Facebook promptly issued an apology, stating that the AI's behavior was an unacceptable error, and disabled the AI-powered recommendation feature. The company said that although it has been improving its AI, the system is still not perfect, that the feature will remain disabled for the time being, and that it will study how to prevent this from happening again. It also apologized to users who saw the offensive recommendation.
AI facial recognition is known to have lower accuracy for people of color. In 2015, Google's AI labeled photos of Black people as gorillas, and Google apologized.
In the United States, the Federal Trade Commission (FTC) warned in April that AI tools exhibiting racial or gender bias could violate consumer protection laws if used to make decisions about credit, employment, or housing loans. Related information can be found here.