Lip service

Facebook activates damage control after ad boycott bleeds stock

On Friday, Mark Zuckerberg did what any CEO would after a heap of bad press and advertiser boycotts sent his company's stock tumbling: backpedaled hard.

Mark Zuckerberg is attempting to save face, and Facebook's share price, from the pummeling that's come as multiple major companies pull their ad dollars from the social network amid protests over its approach to policing hateful and violent content. On Friday, the CEO announced changes to Facebook's speech policies, along with plans to encourage Americans to vote and efforts to fight disinformation and voter suppression.

Zuckerberg's lengthy announcement comes only hours after Unilever and Verizon each pulled their advertising from Facebook. In the days since Facebook removed and then reinstated Donald Trump's pro-violence post against Black Lives Matter protesters, firms including Eddie Bauer, Magnolia Pictures, Ben & Jerry's, REI, and The North Face have distanced themselves from the social network, citing concerns about the danger posed to marginalized groups by the company's lax policies on hate speech, misinformation, and incitement.

The moves led to a drop of more than 8 percent in Facebook's share price on Friday. Yet, despite his seemingly earnest plea to fight hate speech, Zuckerberg continues to disappoint when it comes to the simplest of regulations around harmful rhetoric. This announcement is the latest example.

Here's what will happen — Zuckerberg has announced that Facebook will ostensibly hold advertised material to a "higher standard" for hateful content. This means that its artificial intelligence and human moderation strategies — both of which have severe issues — will allegedly be quicker at spotting rhetoric that targets "people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status." This, presumably, was sparked by the Trump campaign's use of a well-known Neo-Nazi symbol in one of its ads, among other issues.

The Facebook CEO added, "We're also expanding our policies to better protect immigrants, migrants, refugees and asylum seekers from ads suggesting these groups are inferior or expressing contempt, dismissal or disgust directed at them."

The contradictions continue — Facebook doesn't just allow hate, it enables it to spread, tacitly endorsing it by fueling its proliferation. Zuckerberg has promised time and again to "do better," but has failed to back it up. This instance is no different: in the same post in which he assures Facebook users that action is imminent, he says the platform will continue to carry posts deemed to be in the "public interest," even if those posts contravene its hate speech rules.

"Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms," Zuckerberg says.

By that definition, should Trump repeat one of his incendiary posts or issue an ad overtly villainizing a minority group, we can assume the "newsworthy" element will outweigh the public concern and risk posed by such speech. It isn't shocking, but it certainly is appalling that the creator of the biggest social network on earth continues to fail to understand the crux of the issue: Sensationalist content should not outweigh public safety, no matter how "newsworthy" it is.

The complete text of Zuckerberg's post follows:

Three weeks ago, I committed to reviewing our policies ahead of the 2020 elections. That work is ongoing, but today I want to share some new policies to connect people with authoritative information about voting, crack down on voter suppression, and fight hate speech.
The 2020 elections were already shaping up to be heated -- and that was before we all faced the additional complexities of voting during a pandemic and protests for racial justice across the country. During this moment, Facebook will take extra precautions to help everyone stay safe, stay informed, and ultimately use their voice where it matters most -- voting.
Many of the changes we're announcing today come directly from feedback from the civil rights community and reflect months of work with our civil rights auditors, led by noted civil rights and liberties expert Laura W. Murphy and Megan Cacace, a partner at the respected civil rights law firm of Relman & Colfax. Facebook stands for giving people a voice -- especially people who have previously not had as much voice or power to share their experiences.
1. Providing Authoritative Information on Voting During the Pandemic
Last week, we announced the largest voting information campaign in American history, with the goal of helping 4 million people register to vote. As part of this, we're creating a Voting Information Center to share authoritative information on how and when you can vote, including voter registration, voting by mail and early voting. During a pandemic when people may be afraid of going to the polls, sharing authoritative information on voting by mail will be especially important. We'll be showing the Voting Information Center at the top of the Facebook and Instagram apps over the coming months.
In the midst of Covid, we're also focused on preventing new forms of potential voter suppression. For example, if someone says on Election Day that a city has been identified as a Covid hotspot, is that voter suppression or simply sharing health information? Because of the difficulty of judging this at scale, we are adopting a policy of attaching a link to our Voting Information Center for posts that discuss voting, including from politicians. This isn't a judgement of whether the posts themselves are accurate, but we want people to have access to authoritative information either way.
2. Additional Steps to Fight Voter Suppression
In 2018, we updated our policies to ban any content that misleads people on when or how they can vote. We're now tightening these policies to reflect the realities of the 2020 elections.
Since the most dangerous voter suppression campaigns can be local and run in the days immediately before an election, we're going to use our Elections Operations Center to quickly respond and remove false claims about polling conditions in the 72 hours leading into election day. Learning from our experience fighting Covid misinformation, we will partner with and rely on state election authorities to help determine the accuracy of information and what is potentially dangerous. We know this will be challenging in practice as facts on the ground may be uncertain and we don't want to remove accurate information about challenges people are experiencing, but we're building our operation to be able to respond quickly.
We will also ban posts that make false claims saying ICE agents are checking for immigration papers at polling places, which is a tactic used to discourage voting. We'll also remove any threats of coordinated interference, like someone saying "My friends and I will be doing our own monitoring of the polls to make sure only the right people vote", which can be used to intimidate voters. We will continue to review our voter suppression policies on an ongoing basis as part of our work on voter engagement and racial justice.
3. Creating a Higher Standard for Hateful Content in Ads
This week's study from the EU showed that Facebook acts faster and removes a greater percent of hate speech on our services than other major internet platforms, including YouTube and Twitter. We've invested heavily in both AI systems and human review teams so that now we identify almost 90% of the hate speech we remove before anyone even reports it to us. We've also set the standard in our industry by publishing regular transparency reports so people can hold us accountable for progress. We will continue investing in this work and will commit whatever resources are necessary to improve our enforcement.
We believe there is a public interest in allowing a wider range of free expression in people's posts than in paid ads. We already restrict certain types of content in ads that we allow in regular posts, but we want to do more to prohibit the kind of divisive and inflammatory language that has been used to sow discord. So today we're prohibiting a wider category of hateful content in ads. Specifically, we're expanding our ads policy to prohibit claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status are a threat to the physical safety, health or survival of others. We're also expanding our policies to better protect immigrants, migrants, refugees and asylum seekers from ads suggesting these groups are inferior or expressing contempt, dismissal or disgust directed at them.
4. Labeling Newsworthy Content
A handful of times a year, we leave up content that would otherwise violate our policies if the public interest value outweighs the risk of harm. Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms.
We will soon start labeling some of the content we leave up because it is deemed newsworthy, so people can know when this is the case. We'll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what's acceptable in our society -- but we'll add a prompt to tell people that the content they're sharing may violate our policies.
To clarify one point: there is no newsworthiness exemption to content that incites violence or suppresses voting. Even if a politician or government official says it, if we determine that content may lead to violence or deprive people of their right to vote, we will take that content down. Similarly, there are no exceptions for politicians in any of the policies I'm announcing here today.
Overall, the policies we're implementing today are designed to address the reality of the challenges our country is facing and how they're showing up across our community. I'm committed to making sure Facebook remains a place where people can use their voice to discuss important issues, because I believe we can make more progress when we hear each other. But I also stand against hate, or anything that incites violence or suppresses voting, and we're committed to removing that no matter where it comes from.
We're continuing to review our policies, and we'll keep working with outside experts and civil rights organizations to adjust our approach as new risks emerge. I'm optimistic that we can make progress on public health and racial justice while maintaining our democratic traditions around free expression and voting. I'm committed to making sure Facebook is a force for good on this journey.