Facebook Hasn’t Learned Its Lesson: Social Media Giant Again Falls Flat in Latest Hate Speech Detection Test
Everything You Need to Know in Less Than 50 Words
Tech giant Facebook continues to drop the ball on detecting hate speech in its promoted content. Two non-profits tested the company’s systems for screening hate speech in advertisements by submitting a set of ads containing “obviously” violent hate speech — Facebook approved every one for posting.
Tell Me More
A pair of non-profit groups testing how well Facebook and parent company Meta curb hate speech in advertisements found the platform approved, without pushback, their ads calling for ethnic cleansing.
Global Witness and Foxglove say they purchased a series of 12 text ads calling for the murder of Ethiopians from three of the country’s major ethnic groups: the Amhara, the Oromo, and the Tigrayans.
Despite Facebook’s efforts to fix hate speech issues on the platform, the non-profits’ ads were approved by the company’s systems. The ads were never published, however; instead, both groups alerted Meta that its systems had failed.
Global Witness made headlines earlier this year when it tested another series of hate speech-filled advertisements targeting Myanmar. Those ads, too, were approved but never published.
Non-Profits Call on Facebook to Try Harder to Curb Hate Speech
Global Witness and Foxglove released their findings to the media, calling on the company to “vastly upscale” its moderation operation.
“When ads calling for genocide in Ethiopia repeatedly get through Facebook’s net – even after the issue is flagged with Facebook – there’s only one possible conclusion: there’s nobody home. Years after the Myanmar genocide, it is clear Facebook hasn’t learned its lesson,” Foxglove director Rosa Curling said in the statement.
Facebook Claims ‘Industry-Leading’ Systems Are Working
Meta says they have been closely monitoring the situation in Ethiopia, calling their efforts to curb hate speech in the country “industry-leading.”
The company claims that between May and October 2021, they removed 92,000 pieces of Ethiopia-related content from Facebook and Instagram that violated their Community Standards.
Meta says they’ve spent two years working on the situation in Ethiopia to prevent violent rhetoric from inflaming the less than 10 percent of the population who use the platform there.
“Ethiopia is an especially challenging environment to address these issues, in part because there are multiple languages spoken in the country,” the company said in a statement.
Facebook Has Spent Years Trying to Stamp Down Hate Speech
Facebook has overhauled its algorithms several times to curb hate speech, especially following the 2016 American presidential election.
The company’s “worst of the worst” project attempted to overhaul automatic content moderation by assigning numeric values to specific comments targeting ethnic groups or people of different sexual orientations.
Facebook Continues to Receive Criticism
Despite these efforts, Facebook still draws criticism for how it handles content moderation. Meta claims to remove 90 percent of hate speech on Facebook, but according to a Wired article, that figure is grossly inflated.
The criticism follows a whistleblower’s report to the SEC pushing for stronger laws and regulations governing the platform.
Source: Associated Press