
Facebook finally bans deepfakes (well, some of them)

The company announced the decision in a late-night blog post.


Facebook is taking a stance on deepfakes and other forms of "manipulated media." Sort of. In a blog post published Monday night, the company outlined new criteria for handling digitally altered media.

From now on, Facebook says it will remove videos that have been altered in a way that makes them indistinguishable from the real thing to the average viewer and thus carry the potential to "mislead someone into thinking that a subject of the video said words that they did not actually say."

More broadly, the company says it is cracking down on AI-altered videos that present false content as though it were authentic. There's an interesting omission, though: satirical content.

"This policy does not extend to content that is parody or satire," Facebook said, "or video that has been edited solely to omit or change the order of words."

Deepfakes have become a growing concern, particularly ahead of the upcoming U.S. election, and Facebook has so far remained pretty hands-off, to the ire of many. That's despite the fact that the company's own CEO, Mark Zuckerberg, was the subject of one such video after Facebook decided not to remove similarly falsified footage of House Speaker Nancy Pelosi.

After many months of scathing criticism over its lax approach, Facebook has finally started taking steps to address the issue, with the new policy being just the latest. Last month, the company announced a partnership with global news agency Reuters to offer courses on deepfake identification. And, apparently, there's more to come.

"As these partnerships and our own insights evolve, so too will our policies toward manipulated media," Facebook says. We'll see what that means.