Play nice

Tinder will now ask you to reconsider sending offensive messages

Digital dating is hard enough without people being dicks to one another.


Tinder is introducing a new feature that automatically double-checks with users before sending potentially offensive messages, in an attempt to cut down on hate and harassment on the popular dating app. The feature — aptly called Are You Sure? (AYS?) — will prompt users to think on that exact question when their message is judged to be rude or potentially harmful.

The AYS prompt has already shown promising results in early testing. Tinder says it’s reduced inappropriate language in messages on the app by “more than 10 percent” with early adopters. Users who saw the prompt were also less likely to be reported for inappropriate messages over the following month, Tinder says.

The new double-check feature is part of a broader push by Tinder to use artificial intelligence to make its platform safer for users. In January 2020, Tinder began introducing a suite of AI tools meant to warn users when they receive messages that might be offensive. That prompt — Does this bother you? — has increased the number of reports Tinder receives by 46 percent.

Chatting with strangers on the internet is inherently risky, and, of course, AI prompts aren’t going to get rid of all the bad actors on Tinder. But they might help reduce the hate users face on the app.

Just a nudge — The internet has made it easier than ever to communicate, but this comes with the delightful side effect of it being easier to spread hate speech and harassment, too. And while AI is relatively good at spotting these incidents now, it’s by no means perfect. There are going to be false positives — so it wouldn’t make sense to just outright block every message the AI deems hateful.

A nudge, then, is a decent middle ground. It forces the user to do a double-take and evaluate their message. If it’s a false flag, no harm done. If the message actually is harmful, a simple question — are you sure you’re saying what you mean and want it to sound like this? — might even teach users about the effect of their words and change their behavior moving forward.
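The nudge logic described above can be sketched as a simple decision: send low-risk messages straight through, and ask the sender to confirm anything the classifier flags. This is only an illustration, not Tinder's actual implementation — the threshold, the `screen_message` function, and the `confirm` callback are all hypothetical stand-ins.

```python
# Illustrative sketch only (not Tinder's real code): nudge instead of block,
# because the classifier can produce false positives.

NUDGE_THRESHOLD = 0.7  # hypothetical confidence cutoff


def screen_message(message: str, toxicity_score: float, confirm) -> bool:
    """Return True if the message should be sent.

    `toxicity_score` stands in for a classifier's output in [0, 1];
    `confirm` is a callback that shows the "Are you sure?" prompt
    and returns the sender's decision.
    """
    if toxicity_score < NUDGE_THRESHOLD:
        return True  # likely fine: send without interruption
    # Possibly offensive: double-check with the sender rather than
    # blocking outright, since the flag may be a false positive.
    return confirm(message)
```

The key design choice is that a flagged message is never silently dropped: the sender always gets the final call, which keeps false positives harmless while still forcing a moment of reflection.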

Making the web safer — This type of little nudge is quickly becoming the de facto anti-harassment feature across the web. Instagram has been testing a similar nudge as part of its latest anti-abuse tool rollout, while Bumble now auto-flags messages that might be considered body-shaming. Even the chaos of Twitter has been somewhat mitigated as of late by warning users to double-check themselves before posting.

Tinder has had more than its fair share of problems in the recent past; it seems the company's big bet on AI is helping things cool down. The new feature began rolling out this week. But you're the sort of person who'll never see it, right?