Culture

Telegram still hasn’t taken action against that deepfake nude bot

More than 100,000 people victimized by one bot. The company's solution is apparently to ignore the problem and hope it goes away on its own.


Telegram, one of the most popular messaging apps that offers end-to-end encryption, is home to more than 400 million monthly active users. It also houses some truly disturbing horrors — like an AI-powered bot that’s creating and distributing deepfake nudes of women. To make matters much worse, Telegram still hasn’t banned the bot, despite knowing about it for many months.

In a report released last month, cybersecurity firm Sensity found that more than 100,000 women had already been targeted by the deepfake bot. Use of the bot had shot up almost 200 percent in the month preceding the report; at least 70 percent of the targeted individuals had their images taken from social media.

Sensity first reported the issue to Telegram when it spotted the bot’s activity at the beginning of 2020. Still, nothing has been done to remove the bot from Telegram’s servers. The company hasn’t even publicly acknowledged that the bot exists. It’s such a damning look for Telegram that it would be laughable if the consequences weren’t so awful.

Some minor progress, though — It’s unclear why Telegram has failed to take direct action against this explicitly abusive bot. The company did reportedly block direct access to it — attempting to open the bot now displays a generic message saying it cannot be displayed — but the bot itself is still active and can be reached from Android devices and Telegram’s Mac app.

Nonetheless, Sensity’s report has brought attention to the bot’s activities, and that alone seems to have quieted down some of those posting the fake nudes. The groups advertising the bot have gone mostly silent, Sensity CEO Giorgio Patrini told Wired, and the bot’s owner wiped a public gallery of the deepfake nudes.

Apple also blocked the bot on iOS for violating App Store guidelines. That’s more of a bandage (and a flimsy one at that) than a solution, but it’s certainly better than no progress at all.

Nothing new, still very bad — Telegram has long prided itself on being a private platform where free speech is championed above all else. That stance has, as you might expect, made the platform difficult to moderate. It’s become something of a safe haven for radical groups worried they’ll get the boot from Facebook.

The list of extremist groups known to use Telegram for organizing is pretty extensive. Most recently, it’s been known as a hangout for the radical pro-Trump group the Proud Boys; there are also neo-Nazi and fascist hate groups, and, yes, even ISIS has used Telegram for its activities.

And bound to get worse — This is the first known instance of deepfake abuse at such a large scale, but Sensity says it’s only the beginning. It’s already pretty easy for the average user to create and deploy a bot like this, and the process is only becoming simpler with each passing day. If Telegram doesn’t take firm control of the situation soon, whether through policy changes or more robust moderation, it’s only going to keep getting worse.

Wired reached out to Telegram’s press team and its founder, Pavel Durov; neither responded to its inquiries. Seems like Telegram is fine with its image being forever tainted by this kind of activity, as long as the app continues to be profitable.