Culture

Is Twitter’s image-cropping algorithm racially biased?

Users have been pointing out that Twitter often chooses to focus image crops on white people. Time for Twitter to do some research.

Twitter is notoriously bad at cropping images. The platform automatically crops in-stream image previews using algorithms, which leads to some unfortunate situations where the actual subject of a photo is cut out of the frame.

Perhaps the most egregious cropping mistakes center on how Twitter’s algorithms view people — and we’re not just talking about the focus being on your chin instead of your eyes in a selfie. It seems Twitter’s image-cropping algorithm favors white people. Not just every once in a while, either; the algorithm chooses to crop images to focus on white people almost every time.

Quite a few people demonstrated the problem this weekend by tweeting images containing both white people and people of color. In almost every instance, Twitter’s algorithms chose to display the white face, even when that face didn't constitute the majority of the image.

Twitter seemed just as taken aback by the bias as its users. Employees from the company took to the network this weekend to express their concern over the issue and their dedication to finding a solution as soon as possible.

What Twitter says — While the company has not made any official statements on the matter, its employees have been paying attention to the drama.

Liz Kelley, a member of Twitter’s communications team, quote-retweeted one of the “experiments” and confirmed that the company had indeed done bias testing before shipping its new image algorithm; no racial or gender bias was found in that testing.

Twitter’s chief design officer, Dantley Davis, also tweeted about the bias, stating the company is “still investigating” the neural networks used for cropping.

CTO Parag Agrawal also acknowledged the problem. “To address it, we did analysis on our model when we shipped it, but needs continuous improvement,” he tweeted. He says he is “eager to learn” from the experience.

Sometimes it’s fine, though — The so-called experiment does replicate in many cases: everyone from prominent political figures to fictional characters is affected by Twitter’s image-cropping algorithms.

But this is not always the case. Some users’ “experiments” — especially the more in-depth ones — proved less conclusive. It’s not that Twitter always crops images to feature white people more prominently — but it does happen fairly often, and it’s machine learning we have to blame.

Why is this happening? — At the base of this discussion is the ongoing problem of algorithmic bias. Though it may sound counterintuitive, computer models are never neutral: bias in their creators, whether implicit or explicit, tends to work its way into the algorithms they build.

Twitter’s algorithm uses saliency detection to approximate the most important parts of a given image and centers the crop there. But saliency is, like everything else, subjective. Saliency models are trained on data generated by humans, such as eye-tracking records of where people look first in an image, and because human attention carries implicit bias, the models can end up reproducing it.
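
To make the idea concrete, here is a minimal sketch of saliency-based cropping in general. It uses OpenCV’s off-the-shelf spectral-residual saliency model (from opencv-contrib-python) rather than Twitter’s proprietary neural network, and the file name and crop dimensions are placeholder assumptions.

import cv2
import numpy as np

def saliency_crop(image, crop_w, crop_h):
    """Crop a (crop_w x crop_h) window centered on the most salient pixel."""
    # Stand-in saliency model: OpenCV's spectral-residual method,
    # not the neural network Twitter actually uses.
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = saliency.computeSaliency(image)
    if not ok:
        raise RuntimeError("saliency computation failed")

    # Location of the highest saliency score.
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)

    h, w = image.shape[:2]
    # Clamp the crop window so it stays inside the image bounds.
    left = min(max(x - crop_w // 2, 0), max(w - crop_w, 0))
    top = min(max(y - crop_h // 2, 0), max(h - crop_h, 0))
    return image[top:top + crop_h, left:left + crop_w]

if __name__ == "__main__":
    img = cv2.imread("example.jpg")      # placeholder file name
    crop = saliency_crop(img, 600, 335)  # example preview dimensions, assumed
    cv2.imwrite("cropped.jpg", crop)

Whatever model produces the saliency map, the crop simply follows its peak, which is why any skew in the map translates directly into whose face ends up in the preview.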

Some users have suggested that part of the problem stems from how the algorithm interprets the brightness of an image’s background, but that hasn't been conclusively demonstrated either.
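
For the curious, one toy way to probe that kind of hypothesis against the same stand-in saliency model from the sketch above (again, not Twitter’s network) is to build a composite of a darkened and a brightened copy of the same photo and check which half the saliency peak lands on. The file name is a placeholder.

import cv2
import numpy as np

img = cv2.imread("portrait.jpg")  # placeholder file name
dark = np.clip(img.astype(np.int16) - 60, 0, 255).astype(np.uint8)   # darkened copy
light = np.clip(img.astype(np.int16) + 60, 0, 255).astype(np.uint8)  # brightened copy
composite = np.hstack([dark, light])                                 # side-by-side test image

saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
ok, saliency_map = saliency.computeSaliency(composite)
assert ok, "saliency computation failed"

# Which half of the composite contains the saliency peak?
_, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
side = "darker (left)" if x < composite.shape[1] // 2 else "brighter (right)"
print("Saliency peak fell on the", side, "half")

A one-off result like this proves nothing about Twitter’s model; it only shows the shape of the tests users were improvising over the weekend.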

It’s impossible, at this point at least, to remove 100 percent of bias from our algorithms. They’re learning from us, after all. But Twitter definitely has some improvements to make on this one — the acknowledgment that there's a problem is a promising start.