
Twitter asks researchers, ‘is our photo cropping tool racist?’

The company has been criticized by people who say its AI tool disproportionately excludes Black people when cropping photos.


Twitter is offering a reward to any researcher who can identify bias in its algorithm for cropping images. The company earlier this year admitted that its algorithm, which tries to crop images to show the parts considered most “interesting,” tends to cut out the faces of Black people up to 8 percent more often than those of white people.

"Machine learning based cropping is fundamentally flawed because it removes user agency and restricts user's expression of their own identity and values, instead imposing a normative gaze about which part of the image is considered the most interesting," the company wrote in a May blog post. The racist nature of the photo cropping tool was first reported back in September 2020.

Twitter is offering rewards ranging from $500 to $3,500 for anyone who identifies biases in its algorithm. Winners will be announced at Def Con on August 8 and will be invited to present their work at the popular hacker conference. The company has released the algorithm’s code for hackers and researchers to dig into.

Algorithmic bias — By default, Twitter crops images so that they don’t take up too much space on a user’s screen. After revealing its findings of bias, the company updated its mobile app to begin showing photos in full to some users, without any crop. But Twitter still wants to figure out whether its algorithm is making biased decisions that favor certain demographics over others.
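
Twitter has published the real cropping model, but as a rough illustration of what saliency-based cropping means, here is a minimal sketch in Python. It is not Twitter’s code: the predict_saliency heuristic below is invented for illustration, standing in for the trained neural network that scores which pixels look most “interesting,” and the crop is simply centered on the highest-scoring pixel.

```python
import numpy as np

def predict_saliency(image: np.ndarray) -> np.ndarray:
    """Stand-in for a learned saliency model: returns a per-pixel
    'interestingness' score. A real system would use a trained network."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Toy heuristic: favor the brightest region, lightly biased toward the center.
    brightness = image.mean(axis=2) if image.ndim == 3 else image
    center_bias = np.exp(-(((ys - h / 2) ** 2) + ((xs - w / 2) ** 2))
                         / (2 * (min(h, w) / 2) ** 2))
    return brightness * center_bias

def saliency_crop(image: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    """Pick the crop window whose center sits on the most salient pixel."""
    saliency = predict_saliency(image)
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    h, w = image.shape[:2]
    top = int(np.clip(y - crop_h // 2, 0, h - crop_h))
    left = int(np.clip(x - crop_w // 2, 0, w - crop_w))
    return image[top:top + crop_h, left:left + crop_w]

# Example: crop a random 600x800 "photo" down to a 300x800 timeline preview.
photo = np.random.randint(0, 256, size=(600, 800, 3), dtype=np.uint8)
preview = saliency_crop(photo, crop_h=300, crop_w=800)
print(preview.shape)  # (300, 800, 3)
```

Whatever scorer is plugged in, the crop inherits its judgments about which faces and regions count as interesting, which is exactly where bias can creep in.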

A lot of people tend to assume that algorithms cannot have prejudices because computer programs don’t have human feelings or emotions. They’re simply supposed to serve up the best possible answer to a query. But computer programs are written by humans, who decide how the algorithms are trained and how they weigh myriad factors. Their output is fundamentally shaped by those choices, flaws and all, and inevitably reflects its creators’ prejudices.

Inputs shape outputs — In a famous case reported by ProPublica, for instance, some judges around the United States were found to be using a program during probation hearings that estimated the chance a felon would commit another crime if released. The algorithm was found to be biased: it frequently gave Black people a higher “risk score” than white people, even when the white person had committed a more serious crime. Factors weighed by one algorithm ProPublica studied included a defendant’s education level and whether they had a job.
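
The core of ProPublica’s test can be sketched in a few lines: among defendants who did not go on to reoffend, compare how often each group was nonetheless labeled high risk. Everything below, the records, the score scale, and the threshold, is hypothetical and only meant to show the shape of that check.

```python
from collections import defaultdict

# Each record: (group, risk score on a 1-10 scale, reoffended within two years?)
# Hypothetical data purely for illustration.
records = [
    ("Black", 8, False), ("Black", 7, False), ("Black", 3, True),
    ("white", 4, False), ("white", 9, True), ("white", 2, False),
]

HIGH_RISK = 7  # scores at or above this threshold are treated as "high risk"

def false_positive_rates(rows):
    """Among people who did NOT reoffend, how often was each group labeled high risk?"""
    labeled_high = defaultdict(int)
    did_not_reoffend = defaultdict(int)
    for group, score, reoffended in rows:
        if not reoffended:
            did_not_reoffend[group] += 1
            if score >= HIGH_RISK:
                labeled_high[group] += 1
    return {g: labeled_high[g] / did_not_reoffend[g] for g in did_not_reoffend}

print(false_positive_rates(records))
# e.g. {'Black': 1.0, 'white': 0.0} on this toy data; ProPublica reported a smaller
# but similarly directed gap on real risk scores.
```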

Algorithms are also limited by the data they’re trained on. Facial recognition algorithms have long been known to misidentify Black people disproportionately, which may be because there aren’t enough photos of Black people in the datasets used to teach the systems to recognize and differentiate faces. Companies including Google have responded by sourcing more diverse training data.
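
One way to make that point concrete is to audit the composition of a training set, since per-group representation (and, downstream, per-group error rates) can be measured directly. The dataset and numbers below are made up for illustration.

```python
from collections import Counter

# Hypothetical training-set labels; a real audit would pull these from dataset metadata.
training_labels = ["white"] * 8000 + ["Black"] * 900 + ["Asian"] * 700 + ["other"] * 400

def composition(labels):
    """Report what fraction of the training data each group makes up."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

print(composition(training_labels))
# {'white': 0.8, 'Black': 0.09, 'Asian': 0.07, 'other': 0.04}
# A model trained on a skewed set like this will typically do worse on the
# under-represented groups, which is why per-group error rates are worth checking.
```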

Some companies have begun abandoning facial recognition tools altogether over concerns about bias, as well as the technology’s potential use in over-policing majority-Black areas.