Culture

Facebook let a politician's ad promoting shooting people run for days

The ad ran for five days and reached more than 50,000 people. It was only removed because a journalist emailed Facebook about it.

Facebook really never learns. As protests sparked by the murder of George Floyd continue across the United States, Facebook ran ads in Georgia that encouraged viewers to shoot protestors, as reported by Judd Legum in his newsletter Popular Information.

The original ad is still viewable in Facebook's Ad Library.

The minute-long ad stars — and was paid for by — Paul Broun, a Republican candidate for Congress in the state of Georgia. In the ad, Broun wanders around a field shooting birds with an automatic rifle, waxing poetic about how it’s important to defend your property from “looting hordes from Atlanta” and the “tyrannical government from Washington.” To that end, he states, he’s giving away a free AR-15, which he calls a “Liberty Machine.”

The ad ran for five days before Facebook finally removed it — and at that point, it was only because Legum had alerted Facebook that the ad violated its own policies. By the time of its removal, the ad had been seen by more than 50,000 people, according to Facebook’s own Ad Library estimates.

The fact that the ad contravenes Facebook’s own policies is beside the point, though. Even if Facebook’s policies did not speak to violent posts, allowing someone to pay for an ad that calls for shooting people would be egregious in any circumstance, on any platform. Any platform, that is, except one headed by Mark Zuckerberg, who has made it abundantly clear how little he cares for ethical considerations.

Zuck isn’t budging — When Twitter decided to hide Trump’s tweets for inciting violence against protestors, Facebook took the opposite approach, leaving an identical post on the platform where it could be seen by billions of users. Zuckerberg has since defended that decision both publicly and in private Facebook meetings.

“After everything I’ve read and all the different folks that I’ve talked to, that reference is clearly to aggressive policing — maybe excessive policing — but has no history of being read as a dog whistle for vigilante supporters to take justice into their own hands,” Zuckerberg said in a recent meeting with Facebook employees.

Zuckerberg called the decision to leave the post up “pretty thorough.” Somehow that thoroughness led him right back to his tried, tested, and reviled hands-off stance.

Employees are not happy — Disgruntlement over Zuckerberg’s point of view on this matter is not limited to the public. Both former and current employees have taken the time to speak up about Facebook’s continued inaction. Current employees staged a walkout this week over Zuckerberg’s decisions; some have even resigned over the issue.

Earlier this week, former employees of the company felt strongly enough about the issue to publish an open letter to Zuckerberg in The New York Times. In the letter, the former employees call out Zuckerberg’s refusal to monitor truth, stating that the platform already does so all the time by fact-checking posts from non-politicians.

“Facebook now turns that goal on its head. It claims that providing warnings about a politician’s speech is inappropriate, but removing content from citizens is acceptable, even if both are saying the same thing. That is not a noble stand for freedom. It is incoherent, and worse, it is cowardly. Facebook should be holding politicians to a higher standard than their constituents.”

Beyond censorship — The ongoing, very public debate between Facebook and other social media platforms centers mostly on fact-checking. Who is allowed to be — as Zuckerberg frequently puts it — an “arbiter of truth”?

But this argument goes well beyond “truth” and who can dictate it. Violence, and fatal violence at that, is not a question of facts or lies. It’s about giving a platform to those who would like to kill other people and encourage others to do the same. And it’s about Facebook receiving money for advertising that message to the world.

Twitter understands this to some extent. The company is focusing its moderation on posts with the “highest potential for harm,” rather than on truthfulness alone. It’s a meaningful shift in moderation that Facebook would do very well to learn from. It won't, of course.

Facebook’s arguments about why it fails to censor political figures, especially in advertisements, are entirely illogical. On a base level, this ad violates the platform’s policies about inciting violence — so why was it allowed to run in the first place? Why did it take a reporter’s email to have it removed? Because Facebook doesn't care about moderation. It doesn't care about its responsibilities as one of the most influential sources of information on earth. And it sure as hell doesn't care about the truth.