Facebook just banned blackface and anti-Semitism from the platform. Today, in 2020.

22.5M

The number of “hate speech” posts Facebook removed between April and June.


Facebook today released the sixth edition of its Community Standards Enforcement Report, providing metrics on how often content is removed from its platforms, including Instagram. The company also updated its hate speech policy to explicitly ban depictions of blackface and anti-Semitic stereotypes.

Both were already prohibited, but the company says it wanted to put those examples in writing so that content moderators know to remove them. In a blog post, Facebook elaborated:

We are in the process of identifying the specific stereotypes that most often show up on Facebook and Instagram so that we can establish a clearly defined list of harmful stereotypes that we will remove globally, and will begin by banning: 1) blackface, which is part of a history of dehumanization, denied citizenship and efforts to excuse and justify state violence; and 2) stereotypes about the power of Jews as a collective in the form of Jewish people running the world or controlling its major institutions, which reflect hatred toward Jews. This type of content has always gone against the spirit of our hate speech policies, but writing a policy to capture this content equitably and at scale has proven difficult.

The new report covers April through June of this year, and shows how Facebook has struggled to police certain types of content after it sent much of its moderation staff home due to the coronavirus.

Big increase in hate speech takedowns — Facebook took action on far more content deemed "hate speech" this time around: 22.5 million posts, versus 9.6 million in the first three months of the year. The company has recently been dealing with a boycott from major advertisers who say they don't want their campaigns displayed alongside hate speech.

Are they related? It's possible. Some of the brands that participated in the boycott have already said they will resume spending on Facebook properties.

On the other hand, Facebook took action on far less content pertaining to suicide and self-injury during the same period: 911,000 pieces of such content, versus 1.7 million from January to March. Its constrained workforce also led it to limit the circumstances under which users can appeal violations.

Facebook took action against much more content deemed "hate speech." Facebook

Coronavirus removals — In its report, Facebook says that during the three-month period it removed over seven million pieces of harmful coronavirus misinformation from Facebook and Instagram. It placed fact-checking labels on an additional 98 million pieces of coronavirus misinformation that it didn't remove. Last week, the company also removed a post from President Trump for the first time ever, for spreading misinformation about the coronavirus.

The platform problem — Facebook has long stated that it wants to rely more on algorithms to proactively catch violating content, and the company says 98 percent of the suicide and self-harm content it removed in the quarter was caught by its algorithms before any user reported it. Across all categories of content, Facebook now says that 94.5 percent of removals are done proactively, up from 50 percent in 2018.

Still, the drop in action on certain types of content demonstrates how nuanced and complicated human language is — computers have a hard time deciding whether a post genuinely threatens self-harm or merely discusses the topic, so humans still need to intervene. The problem for Facebook is that, at its scale, letting even one in five dangerous posts slip through still adds up to an enormous number. And then there's divisive content that different factions can't agree is acceptable or not. Facebook will likely always get pushback from one group or another, no matter what it decides.