This is how Twitter plans to fix its biased algorithm

It’s calling the solution “Responsible Machine Learning,” and hopefully it’ll mean fewer questionable image crops.


Twitter is trying to improve users’ experience on its platform with its new Responsible Machine Learning initiative, director of software engineering Rumman Chowdhury and staff product manager of machine learning ethics Jutta Williams wrote in a blog post on Wednesday. The company-wide initiative is tasked with improving the way algorithmic decisions are made, mitigating bias in image cropping, and encouraging fairness and transparency around algorithmic influence on the average user’s experience.

Who gets to shape the algorithms? — Williams and Chowdhury write that the group involved in the initiative is "interdisciplinary" and takes into account the input of researchers, safety experts, engineers, data scientists, and other employees at Twitter.

Williams and Chowdhury also write that, in the near future, Twitter users will have access to the initiative's analyses of racial and gender bias in image cropping, of the kinds of timeline recommendations served to racial minorities, and of political content recommendations in seven countries (the authors don't specify which countries, but more detail will presumably be disclosed later).

If an algorithm is suspected of harming users, engineers may rework or remove it. The initiative will also tackle the image-cropping bias that appears to give preferential treatment to people with lighter complexions, and it's expected to give users more control over how their tweeted images are cropped.

You may not notice — "The results of this work may not always translate into visible product changes," Williams and Chowdhury write, "but it will lead to heightened awareness and important discussions around the way we build and apply machine learning." Of course, there's understandable skepticism about the project given that it's run by Twitter itself, whose incentive is to keep users on its service first and to address the service's problems second.

The announcement comes three months after rioters stormed the Capitol in early January, an event that prompted company-wide discussion of how political extremists incite violence and organize on the platform. In the wake of that violence, lawmakers have urged Twitter, Facebook, YouTube, and other social media platforms to be more transparent about the algorithms they use and the biases inherent in them, and have pressed Big Tech to evaluate how these platforms inadvertently create potentially dangerous echo chambers.