I recently read "The Greatest Propaganda Machine in History," an essay by Sacha Baron Cohen featured in James Steyer's "Which Side of History?," and was surprised to see him argue against the way Section 230 defends free speech. It seemed to me that he didn't see the real issue, and it got me thinking: Is there really any way to safely change one of the most important pieces of legislation governing the internet?
Section 230 was enacted as part of the Communications Decency Act of 1996 and protects online platforms like social media sites from being held responsible for the content their users post. It also gives platforms the ability to moderate content on their sites but doesn't require them to do so. Essentially, it protects the right of internet users to say what they want, with the knowledge that it might get taken down.
I agree with Cohen when he says that free speech is good, and that people should not be subject to targeting, harassment and murder simply for being who they are. But the reason he wants Section 230 reformed is that he sees the content posted on social media sites as expressions of the sites' own opinions, and he therefore wants the sites punished for libelous posts.
"Publishers can be sued for libel," he said in the essay. "But social media companies are largely protected from liability for the content their users post — no matter how indecent it is — by Section 230 of, get ready for it, the Communications Decency Act. Absurd!"
What Cohen doesn't acknowledge is that social media companies are nothing like publishing houses. They are free to use. Companies like YouTube place a soapbox in the public forum of the internet and allow anyone in the world to step up and speak. That openness has innate value.
Cohen wants social media companies to be forced into censorship roles and punished for letting unmoderated content slip through the cracks. What he misses is that the burden of this demand wouldn't fall on the companies themselves. If forced content moderation became a legal reality, the real victims would become obvious right away: companies would have to hire thousands of people to sift through the dross and decide which posts to take down. If bots were employed to do the job instead, there's no telling what they would remove or who would be silenced. Moderation is already a dangerous game, so why make it more so?
Social media companies do not own the content users post on their sites, so they do not have the same responsibilities as news companies and publishing houses. The danger lies instead in the algorithms these companies put in place. No, TikTok may not be actively creating material to radicalize the next school shooters, but a site doesn't have to create content to be at fault. It merely has to make dangerous decisions about how to disseminate other people's content.
TikTok is like a restaurant that watches what its patrons order, and then uses that information to force them into eating the same meal over and over again. The menu is there at the beginning of the meal, and the customers may choose what they want, but if they come back, they will be served the same food again whether they liked it last time or not.
Soon, the issue of free speech on the internet is going to come into the public eye, and it shouldn't be a partisan one. Presidents Obama, Trump and Biden have all called for reforms to Section 230, and reform legislation is currently making its way through Congress. If we remember that the problem is the algorithms these companies put in place, not the number of posts they take down, we may be able to shift our focus away from tearing down Section 230 and toward legislation that actually helps the American people. And who knows? We might even make a lot of people happy.
Dominic Solomito (he/him) is a junior studying journalism.