
Even the best deepfake detector only works 65% of the time

There's much work to be done to combat the sneakiest form of deception.


Deepfakes — videos that mimic a real person's likeness with uncanny accuracy — are incredibly hard to spot, and as the tools to make them improve, they threaten to expand the scale of misinformation online. Facebook put out a call for researchers and developers to build deepfake-spotting tools, and the results are in. They're not comforting: the best performer managed an accuracy rate of just 65 percent.

Last year, Facebook announced the Deepfake Detection Challenge (DFDC), with prizes totaling $1 million for the algorithms that could correctly flag the most fake videos from a provided set of 115,000. Contestants trained their algorithms on that dataset and then had to test them against a private set of videos held by Facebook. You don't have to be good at math to work out that the winner correctly classified just under two-thirds of those videos. Given the amount of video uploaded to YouTube alone every day, the remaining third is very worrying.
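For readers curious about the mechanics, here is a minimal sketch of that setup: fit a model on the public training videos, then score it on a private holdout set the entrants never see. The features, labels, and logistic-regression model below are synthetic stand-ins, not anything from an actual DFDC entry (the real submissions worked on raw video, and the challenge ranked them by log-loss rather than accuracy):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, log_loss

rng = np.random.default_rng(0)

# Hypothetical per-video feature vectors (e.g. pooled frame embeddings).
X_train = rng.normal(size=(1000, 32))
y_train = rng.integers(0, 2, size=1000)      # 1 = deepfake, 0 = real
X_holdout = rng.normal(size=(200, 32))       # the private set entrants never see
y_holdout = rng.integers(0, 2, size=200)

# Entrants only get to fit on the public training data.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Scoring happens on the holdout: the 65 percent figure is accuracy here,
# though DFDC itself ranked submissions by log-loss.
probs = clf.predict_proba(X_holdout)[:, 1]
print("holdout accuracy:", accuracy_score(y_holdout, probs > 0.5))
print("holdout log-loss:", log_loss(y_holdout, probs))
```

The private holdout is the whole point of the design: it stops entrants from simply memorizing the answer key, so the score reflects how a detector would fare on videos it has never encountered.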

The winning submission came from a developer named Selim Seferbekov, and it shows how much work lies ahead if Facebook — or any other platform, for that matter — is to consistently recognize (and, possibly, remove) artificial content.

Video is taken as fact — As a society, we've come to treat video as a documentary and witnessing tool, but deepfakes flip that on its head, making it both harder to trust what we see and easier for people to dismiss real footage as "fake news."

One way experts hope to combat this new technology is by developing detection algorithms that keep pace with deepfake generation itself. OpenAI, for instance, is developing technology that mimics the singing voices of famous artists while simultaneously building programs to detect such forgeries. The thinking is that the people best placed to create an antidote to deepfakes are the ones who build deepfake technology in the first place.


Cat and mouse — Machine learning algorithms aren't like humans: they learn only from the video samples they're trained on, and a video containing some characteristic they've never seen before can defeat them outright. Right now the detectors are too rigid — they hunt for specific telltale signals to spot a fake when they need to judge videos more holistically.
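To make that rigidity concrete, here is a toy sketch on entirely synthetic data (not any DFDC model): a detector learns one telltale artifact, then faces a newer generator that no longer produces it. The artifact dimensions and the "eye blinking" framing are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_videos(n, artifact_dim):
    """Synthetic per-video features; fakes leak a signal in one dimension."""
    X = rng.normal(size=(n, 16))
    y = rng.integers(0, 2, size=n)           # 1 = deepfake, 0 = real
    X[y == 1, artifact_dim] += 3.0           # the 'specific signal' a rigid detector keys on
    return X, y

# Train where every fake shows artifact 0 (say, unnatural eye blinking).
X_tr, y_tr = make_videos(2000, artifact_dim=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# A newer generator fixes that tell; its artifact now lives in dimension 7.
X_new, y_new = make_videos(500, artifact_dim=7)
print("seen technique:  ", accuracy_score(y_tr, clf.predict(X_tr)))
print("unseen technique:", accuracy_score(y_new, clf.predict(X_new)))  # roughly a coin flip
```

Against the technique it trained on, the model scores near-perfectly; against the new one, it falls to about chance. That collapse under a shift it has never seen is exactly the cat-and-mouse dynamic researchers worry about.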

Use your head, kid — What this tells us is that the best protection against misinformation remains a thoroughly human skill: being media savvy. If you see a video of former President Obama making wildly inflammatory comments and calling for the death of all Kenyans, stop and ask yourself: Where did this video come from? Have other outlets corroborated it? Has the subject of the video made any further statements?

You should ask these questions because the detection technology clearly isn't ready yet. Hopefully it matures quickly, because even the most online demographic — teenagers — can be tripped up by fake news. And if the teens can't spot the fakes, what hope is there for the rest of us?