
Twitter applauds itself for basic content moderation in petty follow-up to Big Tech hearing

The company says, of course, it shouldn't face regulation because it's already doing enough to combat hate speech and misinformation.

Protesters hold up a sign about violent content on Twitter.
PHILIP PACHECO/AFP/Getty Images

Despite its outsized influence on politics, Twitter was noticeably absent from yesterday's antitrust hearing evaluating the power of major tech platforms. That's probably no great loss, considering the hearing was a disjointed mess that erratically bounced from one topic to another without diving deep on anything. But it also means we didn't get to see CEO Jack Dorsey face unexpected lines of questioning, nor did we get internal Twitter documents like the ones released since the hearing that reveal the true motivations of executives at other tech companies.

Instead, Twitter used the comfort of its own platform to argue, essentially, that it's just the little guy and that any legislation shouldn't harm its ability to continue moderating content as it wishes.

'Debate me,' Twitter says — The thread from its @Policy account spends a lot of time on the many initiatives Twitter has launched around content moderation, particularly how the company now provides "context" alongside questionable tweets rather than pulling them down altogether. Twitter recently began labeling tweets that contain misinformation, including from President Trump, embedding links where users can "Get the facts" about the topics discussed from a variety of bipartisan sources. Pulling questionable tweets would, Twitter says, "dilute the public's right to express dissent or engage debate in response."

This is all self-serving — It's the same argument Twitter has long made, but the way it's making it here feels petty. Twitter is using its own platform to release prepared statements defending itself and attacking other tech companies the day after they faced a real-time grilling.

It's also questionable how effective Twitter's moves have really been. The company has done better than Facebook, but a "Get the facts" link on Trump's tweets is only as useful as users' willingness to click it and participate in the debate. That's expecting a lot from people who largely remain in their echo chambers. Ironically, hiding behind its platform rather than actually debating in public is exactly what Twitter is doing here.

When it comes to safety, the company is now suspending accounts that tweet links to hateful content. But it's notoriously slow at catching this type of content and, unlike Facebook, it allows anyone to tweet anonymously in the interest of free speech.

There's likely some truth to the idea that regulating how platforms moderate content could have negative ramifications. Section 230 in the U.S. lets them choose what stays up without violating anyone's free speech or bringing liability onto themselves if a post leads to harm. Experts say that if 230 were revoked or amended, tech platforms would likely face a choice: delete everything that could potentially be deemed false or dangerous, or stop moderating their platforms altogether.

Twitter's comments about open debate still seem a bit self-serving, since engagement is important to the company, and deleting content from Trump or blocking anonymous users could lead to an exodus to greener pastures. Greater moderation would also cost money, which platforms like Facebook and Google have much more of.

But the tech platforms created these problems in the first place. Older media like broadcast TV don't struggle with this to nearly the same extent because they have moderated commercials and shows from day one. Unchecked, unfettered content from anonymous sources can come with serious consequences, as we now know.

We need the receipts — It'd be much more useful if we could "Get the facts" about Twitter's statements here. Which is to say, we should be getting internal documents like the ones Congress has revealed shedding light on why Apple and others made certain decisions. Like how the late Steve Jobs ordered his company to cut off a developer who was making critical comments about Apple in the press. Such revelations add to concerns that companies with outsized control over major platforms are using that power to unfairly advance their interests.

The question is, does Twitter really have the public good of fostering healthy debate in mind, or does it caution against regulation purely out of financial interest?