
Ads containing coronavirus misinformation are still slipping through Facebook's moderation

Ads are falling through the cracks thanks to a diminished workforce and a heavier reliance on automation.

Image: dust respirator masks on a table (Shutterstock)

Consumer Reports tested Facebook’s claim that it’s doing its best to stop the spread of coronavirus misinformation and, go figure, Facebook failed. The outlet created a fake account and scheduled several ads containing COVID-19 misinformation that ranged from the moderately to the extremely dangerous. With Facebook’s moderation staff down to a fraction of its usual size and a deeper reliance on automated screening, it’s understandable that the company isn’t catching everything. But this test was so obviously suspicious that it raises concerns about how Facebook is handling malicious ads.

What Facebook missed — The fake, nascent account had never posted anything before and used a rendering of the virus as its profile picture. Either factor alone should have raised some flags, and the profile picture also got past Facebook’s image recognition software. That software did flag a photo of a respirator, but once the image was swapped for a similar one, the ad was approved.
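Facebook hasn’t disclosed how its automated ad review matches images, but the behavior Consumer Reports describes is consistent with perceptual hashing, a common near-duplicate detection technique. The sketch below, which uses the open-source Python `imagehash` library with hypothetical file names and a hypothetical threshold, shows why a visually similar replacement photo can land outside the match radius of a previously flagged image.

```python
# Illustrative only: Facebook has not said how its ad screening matches
# images. This sketch shows a common perceptual-hashing approach and why
# a visually similar replacement photo can evade a hash-based block list.
from PIL import Image
import imagehash

# Hypothetical file names, for illustration.
flagged = imagehash.phash(Image.open("flagged_respirator.jpg"))
candidate = imagehash.phash(Image.open("replacement_respirator.jpg"))

# Subtracting two ImageHash objects yields the Hamming distance between
# their 64-bit perceptual hashes; a small distance means near-duplicate.
distance = flagged - candidate
MATCH_THRESHOLD = 5  # hypothetical cutoff

if distance <= MATCH_THRESHOLD:
    print("Blocked: matches a previously flagged image")
else:
    # A different photo of the same subject usually hashes far from the
    # original, so a hash-based filter alone lets the ad through.
    print("Approved by the automated check")
```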

Image: coronavirus molecule (spawns/E+/Getty Images)

The ads ranged from encouraging young people to go outside to recommending small daily doses of bleach to stay healthy. All seven ads were approved. Consumer Reports scheduled the ads but never ran them. Though they likely would’ve been flagged once they started circulating, there’s no telling how many people would have seen them by that point.

Even CEO Mark Zuckerberg said in a recent press call, “Our goal is to make it so that as much of the content as we take down as possible, our systems can identify proactively before people need to look at it at all.” He went on to say that by the time a user flags a post, “a bunch of people have already been exposed to it, whereas if our AI systems can get it upfront, that’s obviously the ideal.”

Why Facebook’s tripping up — Though Facebook’s deep entity classification (DEC) algorithm is very good at removing fake accounts, the company doesn’t seem interested in enlisting it in its fight against misinformation. It has no issue, however, with using AI to handle the bulk of the screening now that most of its human moderators are gone.

Facebook contracts out its content moderation and sent those workers home with pay. Some full-time employees have transitioned to moderation, but it’s unclear how qualified or trained this new workforce is. With only a fraction of the usual 15,000-strong moderation team available, AI seems like a great stopgap, when it’s not incorrectly flagging legitimate posts as spam.

“Editorial review and curation would increase the price of ads overall, if they had that kind of pre-screening workforce,” Joan Donovan, a Harvard lecturer and misinformation researcher, told Consumer Reports. “But the damage caused by not doing so can be deadly. It’s a flaw in the entire design of their advertising system.”