Culture

AI experts warn deepfakes could lead to crimes like extortion and terrorism

They're hard to spot and even harder to prevent.

ALEXANDRA ROBINSON/AFP/Getty Images

It's an inconvenient truth: even some of the best deepfake detectors don't really work. That's a headache for artificial intelligence experts, according to a new University College London study. Of the 20 ways the study identifies that artificial intelligence can be used to exploit people, its authors rank crimes committed with deepfakes as the most troublesome use of the technology.

For one, researchers warn, deepfakes are hard to spot and even more difficult to prevent. Their impact, meanwhile, can range from misleading voters to identity fraud, illicit payments, blackmail, and the related crimes those deceptions enable.

Remember that Obama deepfake? — It's not just scholars who are worried about potential artificial intelligence crimes. Comedians have raised the alarm, too. In 2018, Jordan Peele released a double-edged PSA featuring a deepfake Obama, voiced convincingly by Peele himself. It was creepy and effective.

ROBERT LEVER/AFP/Getty Images

A digital risk with real-world consequences — In 2019, a study found that only 10 percent of the adult population in the United States did not use the internet. The country makes up one of the most powerful, crowded, and consequential online markets and audiences on the globe. In other words, Americans are extremely online. With such a digitally connected demographic, the risks of being manipulated and misled by deepfakes are astronomically high and potentially devastating on political, social, legal, and ethical fronts.

This isn't lost on the researchers behind the deepfake study. Matthew Caldwell, one of the authors, said that a highly digital population is far more likely to be deceived by deepfakes. "Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service," Caldwell said, according to The Next Web. "This means criminals may be able to outsource the more challenging aspects of their AI-based crime."

Oh, there's more — It isn't just deepfakes that could thwart information accuracy and sow discord among netizens. The study's authors warn that artificial intelligence also poses the risks of manipulated driverless vehicles, misinformation amplified through fake news and fake audio, increased online phishing, and the accumulation of mountains of personal online data that can be used for fraud, blackmail, and other forms of exploitation. If these issues aren't preemptively addressed, we're in for some genuine chaos in the near future.