YouTube considered vetting all videos aimed at children

But it didn’t, for fear of cascading calls for accountability.

YouTube reportedly assembled a team of 40 people to manually screen all content aimed at viewers under the age of eight. The move came ahead of the Federal Trade Commission fining the video streaming company $170 million for illegally collecting children’s data without parental consent. But before the screening initiative, dubbed “Crosswalk,” went live, execs pulled the plug, fearful that it would set a dangerous precedent.

Instead of policing the almost 500 hours of content uploaded to the platform every minute –– particularly the portion of it aimed at young viewers –– YouTube is putting the onus on creators to do so. According to Bloomberg, vetting content would make YouTube act more like a media company, beholden to rules of editorial integrity and the liability that comes with it. Never mind that some of the content uploaded to the service is so heinous it’s giving some moderators PTSD.

Creators are on the hook and out of pocket –– In November, YouTube told creators they’ll need to mark their child-focused content “made for kids” so it can be surfaced on the company’s child-friendly splinter version of its app. Creators who fail to do so could be fined thousands of dollars if the FTC decides their content is in fact directed at children.

Meanwhile, “made for kids” content can’t carry personalized ads the way regular YouTube content can, so creators who rely on that revenue may well see their income from the platform plummet, with no meaningful alternative service to switch to.

Bigger fines and mandatory moderation –– YouTube’s by no means the only big-tech player that wants to sidestep being an arbiter of truth. Facebook has long argued it’s merely a conduit for the free speech of others and shouldn’t be burdened with proving the veracity of content on its platform. Except that Facebook and YouTube far outstrip traditional media in scale and reach.

Make the fines more punitive or make moderation mandatory by law –– that is, tether content oversight to profitability –– and companies like YouTube, Facebook and Twitter will find a way to solve problems like hate speech or misinformation, and fast.

As long as identifying and removing dangerous or reprehensible material remains costly and voluntary, though, platforms that rely on user-generated content and maximizing eyeballs on ads remain incentivized to dodge accountability by whatever means necessary. Which is only good news if you’re one of their lawyers.