
YouTube's coronavirus-induced moderation spree more than doubled video removals

The company says it had to compromise accuracy in some cases by relying on algorithms with human moderators out of the mix.


YouTube has ramped up its moderation over the past few months, ever since the COVID-19 pandemic hit. In its latest transparency report, published August 25, the company noted that the pandemic forced it to send its human moderators home for their safety and for public health reasons. Those moderators were effectively replaced by the company's automated filtering systems, which resulted in more than twice as many video removals in the second quarter of 2020.

Unsurprising as this move is, with COVID-19 conspiracy videos and 5G hot takes littering the site, it could inadvertently infuriate content creators whose videos were swept up in the automated purge.

What YouTube says — Before the pandemic, YouTube moderated content with a combination of automated systems and human judgment. Those human moderators gave the company's review process a necessary dose of nuance and complexity. "Human review is not only necessary to train our machine learning systems, it also serves as a check, providing feedback that improves the accuracy of our systems over time," YouTube says.

With the pandemic, automated systems replaced human review, and the company says it was "forced to make a choice between potential under-enforcement or potential over-enforcement." It chose the latter: it says it accepted lower levels of accuracy in order to "cast a wider net," even in scenarios where perfectly legitimate videos would end up being removed.
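To make that trade-off concrete, here is a minimal Python sketch of threshold-based filtering. YouTube has not published how its systems score or remove videos, so everything below (the `should_remove` function, the risk scores, and both thresholds) is hypothetical; it only illustrates how lowering a decision threshold "casts a wider net" at the cost of sweeping up legitimate content.

```python
# Purely illustrative: YouTube has not disclosed how its classifiers work.
# Assume a model assigns each video a risk score between 0.0 (clearly fine)
# and 1.0 (clearly violating), and removal happens above a threshold.

def should_remove(risk_score: float, threshold: float) -> bool:
    """Flag a video for automated removal when its score clears the threshold."""
    return risk_score >= threshold

# Hypothetical scores for five videos.
videos = {"A": 0.15, "B": 0.45, "C": 0.60, "D": 0.75, "E": 0.95}

# With human reviewers available, a high threshold avoids over-enforcement:
# borderline cases go to people instead of being auto-removed.
normal = {v for v, s in videos.items() if should_remove(s, 0.85)}

# Without reviewers, lowering the threshold "casts a wider net": more
# violating videos are caught, but legitimate borderline ones go with them.
wider_net = {v for v, s in videos.items() if should_remove(s, 0.50)}

print(sorted(normal))     # ['E']            -> risks under-enforcement
print(sorted(wider_net))  # ['C', 'D', 'E']  -> risks over-enforcement
```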

Especially when it came to content involving violent extremism and child safety, YouTube says it opted for a higher volume of removals even if it meant the quality of those checks was compromised. That is particularly understandable with human moderators out of the review process. Removing sensitive material involving child safety, even if a video turns out to be about encouraging children's safety on the internet, for instance, follows a simple and rather sympathetic logic: it's better to be safe than sorry.

Don't worry, YouTubers — Because YouTube had to lean more heavily on its automated filters in the absence of human content moderators, it says it tried to make the appeal process a little easier for content creators. If you feel your video was unfairly targeted, you will have a chance to appeal the decision. Naturally, per YouTube, both the number of appeals and the rate at which videos were reinstated doubled compared with the previous quarter.

Networks like Twitter, Facebook, and YouTube have all leaned more heavily on automated filtering in the wake of COVID-19 and the reduced moderation workforces that came with it. None of these approaches is perfect. If this period shows us anything, it's that automated filtering and moderation will never work without the help of human intelligence.